Models of cooperation based on the Prisoner's Dilemma and the Snowdrift game


  • Michael Doebeli (corresponding author), Department of Zoology and Department of Mathematics, University of British Columbia, 6270 University Boulevard, Vancouver, BC, Canada
  • Christoph Hauert, Program for Evolutionary Dynamics, Harvard University, One Brattle Square, Cambridge, MA 02138, USA



Understanding the mechanisms that can lead to the evolution of cooperation through natural selection is a core problem in biology. Among the various attempts at constructing a theory of cooperation, game theory has played a central role. Here, we review models of cooperation that are based on two simple games: the Prisoner's Dilemma, and the Snowdrift game. Both games are two-person games with two strategies, to cooperate and to defect, and both games are social dilemmas. In social dilemmas, cooperation is prone to exploitation by defectors, and the average payoff in populations at evolutionary equilibrium is lower than it would be in populations consisting of only cooperators. The difference between the games is that cooperation is not maintained in the Prisoner's Dilemma, but persists in the Snowdrift game at an intermediate frequency. As a consequence, insights gained from studying extensions of the two games differ substantially. We review the most salient results obtained from extensions such as iteration, spatial structure, continuously variable cooperative investments, and multi-person interactions. Bridging the gap between theoretical and empirical research is one of the main challenges for future studies of cooperation, and we conclude by pointing out a number of promising natural systems in which the theory can be tested experimentally.


Cooperation is ubiquitous in biological systems, and so is its exploitation. Cooperation is a conundrum, whereas its exploitation is not, at least not at first sight. Cooperative entities make a sacrifice: they help others at a cost to themselves. Exploiters, or cheaters, reap the benefits and forego costs. Based on utilitarian principles – be it in the form of evolution by natural selection of the ‘fittest’ type, or in the form of ‘rational’ behaviour generating the highest payoff – exploitation should prevail, and cooperation should be rare.

Yet the history of life on Earth could not have unfolded without the repeated cooperative integration of lower level entities into higher level units. Thus, major evolutionary transitions (Maynard Smith & Szathmáry 1995), such as the evolution of chromosomes out of replicating DNA molecules, the transition from uni-cellular to multi-cellular organisms, or the origin of complex ecosystems, could not have occurred in the absence of cooperative interactions. Similarly, the emergence of complex animal and human societies requires cooperation (Maynard Smith & Szathmáry 1995; Crespi & Choe 1997; Dugatkin 1997).

Since its invention by von Neumann & Morgenstern (1944), the mathematical framework of game theory has been a central tool for understanding how cooperative entities can overcome the obvious fitness and payoff disadvantages and persist in the face of cheating and exploitation. Game theory embodies the concept of frequency dependent selection, which is at the heart of the problem of cooperation, because the actual costs of cooperation ultimately depend on the type of individuals a cooperator interacts with. Maynard Smith & Price (1973) ingeniously related the economic concept of payoff functions with evolutionary fitness, thus marking the advent of an entirely new approach to behavioural ecology that inspired numerous theoretical and empirical investigations. In particular, evolutionary game theory has been used extensively to study the problem of cooperation (Nowak & Sigmund 2004).

These attempts go back to a seminal paper by Trivers (1971), in which he introduced the notion of reciprocal altruism. This notion embodies the idea that cooperation may evolve in a context in which future behaviour may be determined by current payoffs. Reciprocal altruism was famously embedded into evolutionary game theory by Axelrod & Hamilton (1981). Their models are based on the Prisoner's Dilemma game (PD), perhaps the single most famous metaphor for the problem of cooperation (Box 1). In this game, natural selection favours defection and thereby creates a social dilemma (Dawes 1980), because when everybody defects, the mean population payoff is lower than if everybody had cooperated. In the past two decades, it has been a major goal of theoretical biology to elucidate the mechanisms by which this dilemma can be resolved.

The social dilemma of the PD can be relaxed by assuming that cooperation yields a benefit that is accessible to both interacting players, and that costs are shared between cooperators. This results in the so-called Snowdrift game (SD), which is also known as the Hawk-Dove game, or the Chicken game (Maynard Smith 1982; Sugden 1986, Box 1). Its essential ingredient is that in contrast to the PD, cooperation has an advantage when rare, which implies that the replicator dynamics (Taylor & Jonker 1978; Hofbauer & Sigmund 1998) of the SD converges to a mixed stable equilibrium at which both C and D strategies are present. Starting with Maynard Smith & Price (1973), the SD (or Hawk-Dove game) has been well studied in the context of competition and escalation in animal conflicts, but its role as a simple metaphor in the broader context of the evolution of cooperation has been much less emphasized. In spite of this, we think that the SD may actually be widely applicable in natural systems.

Here we review models of cooperation that are based on the PD and SD games. Since the dynamics of these models is easily understood (Box 1), studying suitable extensions can reveal mechanisms by which cooperation can either be enhanced or reduced as compared with the baseline models. In particular, since the PD does not allow for cooperation, any extensions that do can be viewed as representing mechanisms that promote cooperation. The essential feature of any mechanism to promote cooperation is that cooperative acts must occur more often between cooperators than expected based on population averages. Thus, there must be positive assortment between cooperative types (Queller 1985). In the PD, positive assortment can for example arise because of direct reciprocity in iterated interactions, due to spatially structured interactions, or because of indirect reciprocity with punishment and reward. We first review insights gained from such extensions about the conditions under which cooperation can thrive in models based on the PD. We then demonstrate that the same mechanisms may not always give rise to positive assortment, and hence increased cooperation, in the SD. This is surprising, because both games represent social dilemmas, and if anything, the relaxed conditions in the SD would appear to be in favour of cooperators. Nevertheless, extensions of the SD game can reveal general principles for the evolutionary dynamics of cooperation. The PD and the SD are simple mathematical models, but much has been learned from analysing these games and their extensions about one of the core problems in evolutionary biology. We conclude by outlining promising directions for further explorations of the evolution and maintenance of cooperation based on these games and their applications in empirical model systems.

Box 1: The Prisoner's Dilemma and Snowdrift games

In the PD, players can adopt either one of two strategies: cooperate (C) or defect (D). Cooperation results in a benefit b to the opposing player, but incurs a cost c to the cooperator (where b > c > 0); defection has no costs or benefits. This results in the following payoffs (Table 1a): if the opponent plays C, a player gets the reward R = b − c if it also plays C, but it can do even better and get T = b if it plays D. On the other hand, if the opponent plays D, a player gets the lowest payoff S = −c if it plays C, and it gets P = 0 if it also defects. In either case, i.e. independent of whether the opponent plays C or D, it is, therefore, better to play D. In evolutionary settings, payoffs determine reproductive fitness, and it follows that D is the evolutionarily stable strategy (ESS) (Maynard Smith 1982). This can be formalized using replicator dynamics (Taylor & Jonker 1978; Hofbauer & Sigmund 1998), which admits pure defection as the only stable equilibrium.

In the SD, cooperation yields a benefit b to the cooperator as well as to the opposing player, and incurs a cost c if the opponent defects, but only a cost c/2 if the opponent cooperates. This results in the following payoffs (Table 1b): R = b − c/2 for mutual cooperation, T = b for D playing against C, S = b − c for C playing against D, and P = 0 for mutual defection. If b > c > 0 as before, then C is a better strategy than D if the opponent plays D. On the other hand, if the opponent plays C, then D is still the best response. Thus, both strategies can invade when rare, resulting in a mixed evolutionarily stable state at which the proportion of cooperators is 1−c/(2b − c). It is important to note that in this state the population payoff is smaller than it would be if everybody played C, hence the SD still represents a social dilemma (Hauert & Doebeli 2004).
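The location of this mixed equilibrium follows directly from the payoffs defined above: at an interior rest point of the replicator dynamics, cooperators and defectors must earn equal expected payoffs, which gives

```latex
% Interior equilibrium of the SD: equal expected payoffs for C and D
x R + (1-x) S = x T + (1-x) P
\;\Longrightarrow\;
x\Bigl(b - \tfrac{c}{2}\Bigr) + (1-x)(b - c) = x\,b
\;\Longrightarrow\;
x^{*} = \frac{2(b-c)}{2b - c} = 1 - \frac{c}{2b - c}.
```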

Table 1.  Payoff matrices of (a) the Prisoner's Dilemma (PD); and (b) the Snowdrift game (SD). In both cases, the benefits b exceed the costs of cooperation c (b > c > 0), which leads to the characteristic payoff ranking P_DC > P_CC > P_DD > P_CD in the PD and P_DC > P_CC > P_CD > P_DD in the SD for the four possible pairings of C and D. Note that for high costs (2b > c > b) the SD converts to the PD.
(a) Prisoner's Dilemma (payoff to the row player):
        C              D
  C     R = b − c      S = −c
  D     T = b          P = 0

(b) Snowdrift game (payoff to the row player):
        C              D
  C     R = b − c/2    S = b − c
  D     T = b          P = 0
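The contrast between the two matrices can be made concrete with a few lines of code. The following minimal Python sketch (illustrative only; the values b = 2 and c = 1 are arbitrary) integrates the replicator dynamics for both games and recovers the two outcomes described above: extinction of cooperators in the PD, and the mixed equilibrium 1 − c/(2b − c) in the SD.

```python
b, c = 2.0, 1.0  # arbitrary illustrative values with b > c > 0

def replicator_equilibrium(payoff, x0=0.5, steps=2000, dt=0.01):
    """Euler integration of dx/dt = x(1 - x)(pi_C - pi_D) for a 2x2 game.

    payoff[i][j] is the payoff to strategy i (0 = C, 1 = D) against strategy j.
    """
    x = x0  # frequency of cooperators
    for _ in range(steps):
        pi_C = x * payoff[0][0] + (1 - x) * payoff[0][1]
        pi_D = x * payoff[1][0] + (1 - x) * payoff[1][1]
        x += dt * x * (1 - x) * (pi_C - pi_D)
    return x

# Prisoner's Dilemma: R = b - c, S = -c, T = b, P = 0
pd = [[b - c, -c], [b, 0.0]]
# Snowdrift game: R = b - c/2, S = b - c, T = b, P = 0
sd = [[b - c / 2, b - c], [b, 0.0]]

print("PD cooperator frequency:", round(replicator_equilibrium(pd), 3))    # 0.0
print("SD cooperator frequency:", round(replicator_equilibrium(sd), 3))    # ~0.667
print("SD prediction 1 - c/(2b - c):", round(1 - c / (2 * b - c), 3))      # 0.667
```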

Models of cooperation based on the Prisoner's Dilemma

The PD embodies the problem of cooperation: although individuals can benefit from mutual cooperation, they can do even better by exploiting cooperation of others. Therefore, the PD provides an interesting basis for exploring mechanisms that can either prevent exploitation or make it unprofitable, thus enabling cooperation to persist.

Iterated interactions

In the Iterated Prisoner's Dilemma (IPD), a single game consists of a number of rounds of the simple PD, which allows individuals to react to an opponent's past behaviour. If players interact repeatedly before the final tally is made, low expected payoffs in future interactions because of retaliation against current defection could render cooperation beneficial. This is the basic idea of reciprocal altruism (Trivers 1971). Repeated interactions open up a whole new world of possible strategies determining whether to cooperate or defect in the next round based on the outcome of earlier rounds. Exploring this world has been the subject of intense scrutiny by researchers in various fields, including economics, political science, biology, computer science and artificial intelligence.

Perhaps the best-studied class of strategies in the IPD consists of strategies that base their behaviour in round n + 1 of an interaction on what happened in round n. The most famous example of this type of strategy is ‘Tit-for-Tat’ (TFT), which consists of cooperating in the first round of the iteration, and then doing whatever the opponent did in the previous round. In the seminal computer tournaments of Axelrod (1984), the simple TFT strategy emerged as the clear winner against a range of other strategies (including very sophisticated ones), and the success of TFT was attributed to the fact that it never defects first, retaliates when the opponent defects, but forgives when the opponent reverts to cooperation. These properties generate iterated interactions that consist either mostly of CC rounds or mostly of DD rounds, and hence can be interpreted as giving rise to positive assortment between cooperative behaviours (J. Fletcher, pers. comm.). In particular, in a population of TFT players, individuals end up always cooperating, hence the success of TFT corresponds to maintenance of cooperation. However, the precise meaning of success in the IPD is somewhat ambiguous. For example, Boyd & Lorberbaum (1987) showed that no deterministic strategy is evolutionarily stable in the IPD. Moreover, TFT performs poorly in a noisy world, in which players are prone to make erroneous moves that can cause long series of low paying retaliatory behaviour.

The problem of noise can be addressed by considering probabilistic strategies. For strategies that condition the propensity to cooperate on the opponent's move in the previous round, evolutionary dynamics reveals that the probabilistic strategy Generous TFT (which retaliates only with probability 2/3) prevails in the long run, albeit only after its rise is catalysed by TFT (Nowak & Sigmund 1992). If the strategy space is extended to include strategies that condition the probability to cooperate on the payoff received in the previous round, i.e. on the previous moves of both the opponent and the player itself, a new type of strategy, termed Pavlov, evolves (Nowak & Sigmund 1993). Pavlov implements a simple and intuitive behavioural rule: win-stay, lose-shift. It consists of repeating the previous move if that move resulted in the high PD payoffs T or R, and of switching to the opposite behaviour if the previous round resulted in the low PD payoffs P or S. Interestingly, Pavlov again relies on TFT as a catalyst of cooperation. To date, Pavlov appears to be the most consistently successful strategy in the IPD (Kraines & Kraines 2000).

Many other IPD strategies have been studied in the literature (see e.g. Sugden (1986); Brembs (1996) and Dugatkin (1997), especially Table 2.1 in the latter). In addition, there are many interesting variants and extensions of the IPD, which we can only mention briefly here. A biologically relevant alternative is obtained by considering the alternating IPD, in which players take turns in updating their behaviour (Frean 1994; Nowak & Sigmund 1994; Neill 2001). The alternating IPD tends to favour more forgiving strategies than the simultaneous IPD (Frean 1994; Nowak & Sigmund 1994), and it has been argued that the best strategies for the alternating game have a large memory, i.e. are strategies that are based on a number of previous moves (Frean 1994; Neill 2001). This seems to contrast with results for the synchronous IPD, in which increasing the memory size does not seem to significantly change the characteristics of successful strategies (Axelrod 1984; Lindgren 1991; Hauert & Schuster 1997). Other results show that cooperation is favoured if engaging in IPD interactions is optional (Batali & Kitcher 1995; see also Box 3), if there are extrinsic factors that maintain variation in behaviour (McNamara et al. 2004), or if more sophisticated strategies are considered. For example, successful strategies can exhibit an internal state that implements the idea of good and bad standing and enables strategies to deal with certain types of errors (Boerlijst et al. 1997). Internal states can also serve to implement basic forms of information processing that can lead to superior performance (Hauert & Stenull 2002).

By studying stochastic game dynamics in finite populations, Nowak et al. (2004) have recently argued that cooperation in the IPD may be enhanced by small population sizes. In another recent development, a new round of IPD tournaments was organized to commemorate the 20th anniversary of Axelrod's seminal work on the IPD (Axelrod 1984). This time, so-called ‘colluding strategies’ emerged as the winning type. These strategies cooperate with their own type and play TFT against everyone else. In order to discriminate between self and non-self, colluding strategies exchange a secret handshake in the form of a sequence of identification moves at the beginning of each IPD encounter. However, it is not clear how such identification mechanisms would evolve in the first place, and how colluding strategies can increase in frequency when they are rare (i.e. when they do not meet their own type), but the concept of collusion may provide interesting new perspectives for the use of the IPD in behavioural ecology and psychology.

Spatial PD games

Investigating the effects of spatial structure on population biological processes has been a major theme in theoretical ecology and evolution in the past two decades. In particular, it has been realized that spatial structure may be a potent promoter of cooperation. Axelrod (1984) already pointed out the potential role of spatial structure, but it was really the seminal paper by Nowak & May (1992) that spawned a large number of investigations of ‘games on grids’ (Nowak & Sigmund 2000), i.e. evolutionary games that are played in populations whose individuals occupy sites on a spatial lattice. Payoffs obtained from local interactions with neighbouring individuals are then used to update the lattice, i.e. to create subsequent generations in the evolutionary process. The propagation of successful strategies to neighbouring sites may be interpreted either in terms of reproduction, or in terms of imitation and learning (Nowak & Sigmund 2004). There are a number of different ways in which such updating procedures can be implemented with respect to individual sites (e.g. deterministic or probabilistic) and to the entire lattice (e.g. synchronous or asynchronous). Nevertheless, an unambiguous conclusion that has been reached from studies of the spatial PD is that spatial structure promotes cooperation (Nowak & May 1992, 1993; Huberman & Glance 1993; Nowak et al. 1994a; Killingback et al. 1999). Cooperators can survive by forming clusters within which they reap the benefits from mutual cooperation and which allow them to persist despite exploitation by defectors along the cluster boundaries (Fig. 1). Thus, maintenance of cooperation in the spatially structured PD is a robust phenomenon, even though the dynamics of the spatial games can be very complicated, and even though the exact range of PD payoff parameters b and c (cf. Fig. 1a) for which cooperation can persist does depend on the update rules (Huberman & Glance 1993; Nowak et al. 1994a; Nowak & Sigmund 2000). For an interactive on-line tutorial exploring these issues we refer to Hauert (2005).

Figure 1.

The spatial PD. (a) Fraction of cooperators as a function of the cost-to-net-benefit ratio ρ = c/(b − c). Solid (open) squares show simulation results for synchronous (asynchronous) population updates. The solid line indicates predictions from pair approximation (see main text) and the dashed line from replicator dynamics for spatially unstructured populations. For sufficiently small ρ cooperators can survive by forming compact clusters, as illustrated by a typical snapshot of the lattice configuration (b) for ρ close to the extinction threshold of cooperators. Methods: one update rule that is particularly generic in our view consists of updating a focal individual as follows: first calculate the average payoff of the focal individual from interactions with its neighbours, then pick one of the focal individual's neighbours at random and calculate that neighbour's average payoff from its interactions with its own neighbours, and finally determine whether the next occupant of the focal individual's site is the offspring of the focal individual or of the neighbour by probabilistically comparing their payoffs. For synchronous population updates all sites are updated simultaneously, whereas for asynchronous updating the focal individual is drawn at random. Asynchronous updating corresponds to modelling continuous time dynamics. In well-mixed populations, in which all individuals are neighbours of all other individuals, the rule represents an individual based implementation of the replicator dynamics (Hauert & Doebeli 2004).
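The update rule described under ‘Methods’ is easy to implement. The following Python sketch is a minimal, illustrative rendering, not the simulation code behind Fig. 1: it assumes a von Neumann neighbourhood of four sites, periodic boundaries, donation-game PD payoffs with b = 1 and c = 0.1, and a replacement probability proportional to the payoff difference; other concrete choices are equally legitimate and, as noted in the main text, can shift the threshold below which cooperators persist.

```python
import random

L = 50                       # lattice size (arbitrary, moderate value)
b, c = 1.0, 0.1              # benefit and cost of cooperation (b > c > 0)
updates = 100 * L * L        # number of single-site (asynchronous) updates

def pair_payoff(s, t):
    """Payoff from one PD interaction: receive b if the neighbour cooperates, pay c if you do."""
    return b * t - c * s     # strategies coded as 1 = C, 0 = D

def neighbours(i, j):
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def avg_payoff(grid, i, j):
    s = grid[i][j]
    return sum(pair_payoff(s, grid[x][y]) for x, y in neighbours(i, j)) / 4.0

random.seed(1)
grid = [[random.randint(0, 1) for _ in range(L)] for _ in range(L)]

for _ in range(updates):
    i, j = random.randrange(L), random.randrange(L)      # random focal site
    ni, nj = random.choice(neighbours(i, j))             # random neighbour as potential parent
    # probabilistic comparison of average payoffs: the focal site is taken over by the
    # neighbour's strategy with a probability proportional to the payoff difference
    if random.random() < max(0.0, avg_payoff(grid, ni, nj) - avg_payoff(grid, i, j)) / (b + c):
        grid[i][j] = grid[ni][nj]

print("final fraction of cooperators:", sum(map(sum, grid)) / (L * L))
```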

Because spatial clustering implies that cooperators interact more often with their own type than expected by chance based on mean population frequencies, it is possible to interpret the effects of spatial structure on the evolution of cooperation in the context of the theory of kin selection (Hamilton 1963). Box 2 discusses the connections between the spatial PD and kin selection in more detail.

The conclusion that spatial structure is beneficial for cooperation has also been reached for spatial versions of the IPD. For example, Lindgren & Nordahl (1994) showed that compared with non-spatial games, the unconditional cooperator AllC does much better in spatial IPDs and often outperforms TFT. Among deterministic strategies that condition their moves on the previous round, Pavlov is the most successful strategy in spatial IPDs (Lindgren & Nordahl 1994), just as in the non-spatial IPD. However, if probabilistic reactive strategies are considered, spatial structure favours more forgiving versions of the strategies that are successful in unstructured games (Grim 1995; Brauchli et al. 1999).

Most models of spatial evolutionary game theory can exhibit very complicated dynamics [e.g. Killingback & Doebeli (1998); Hauert (2001)], and it is, therefore, generally difficult to obtain analytical results. Some analytical results have been obtained using geometrical arguments about cluster formation (Nowak & May 1992; Killingback et al. 1999; Hauert 2001), and Schweitzer et al. (2002) recently gave a classification of the dynamic regimes in the spatial PD. Interesting phase transitions can occur between the different dynamic regimes of spatial games (Szabó & Tőke 1998; Szabó & Hauert 2002).

Perhaps the most promising approach for understanding the dynamics of lattice models analytically involves the technique of pair approximation (Matsuda et al. 1992; Ellner 2001). This deterministic approximation yields a set of differential equations describing the dynamics of spatial games based on pair correlations between nearest neighbours, while neglecting higher order terms. Pair approximation has led to fairly good agreement with results from numerical simulations in a number of different models (Dieckmann et al. 2000). Examples are shown in Fig. 1 for the spatial PD and in Fig. 2 for the spatial SD.

Figure 2.

The spatial SD. (a) Fraction of cooperators as a function of the cost-to-net-benefit ratio ρ = c/(2b − c). Solid (open) squares show simulation results for synchronous (asynchronous) population updates. The solid line indicates predictions from pair approximation and the dashed line from replicator dynamics for spatially unstructured populations. Except for very small ρ, the fraction of cooperators in the spatial setting lies below the expectations from well-mixed populations (dashed line), and for sufficiently large ρ cooperators are eliminated altogether. (b) The fact that in the SD the best action depends on the opponent's move prevents the formation of compact clusters and instead leads to filament-like structures, as illustrated by a typical snapshot of the lattice configuration for ρ close to the extinction threshold of cooperators.

Other analytical results have been obtained for spatial models under simplifying assumptions, e.g. for one-dimensional lattices (Eshel et al. 1999). Finally, analytical results have also been obtained by using reaction-diffusion models based on partial differential equations (Hutson & Vickers 1995; Ferrière & Michod 1996) to describe spatially structured populations. The results confirm the overall conclusion that spatial structure is beneficial for cooperation in the PD.

So far, this conclusion has been reached mainly for models in which spatial structure was incorporated by using regular square lattices, in which interactions and reproduction/imitation were limited to either the four or the eight nearest neighbours. Some recent results indicate that the lattice topology does affect the dynamics of cooperation and that, interestingly, relaxing the rigid, purely local neighbourhood structure of lattices seems to benefit cooperation (Abramson & Kuperman 2001; Masuda & Aihara 2003; Ifti et al. 2004; Hauert & Szabó 2005). For example, in PD games on random regular graphs (in which all individuals have the same fixed number of neighbours, but neighbours are drawn randomly from the population), the parameter range over which cooperators persist is larger than for regular lattices (Hauert & Szabó 2005). This is surprising because the formation of compact clusters is more difficult on random regular graphs. Also, Koella (2000) has shown that cooperation can persist in spatial PD interactions even if dispersal and interaction distances are allowed to evolve, leading to long-range dispersal and interactions in defectors, but not in cooperators. On the other hand, cooperation can be impeded if for any given individual there is a substantial difference between its interaction neighbourhood and its reproduction neighbourhood (Ifti et al. 2004). It will be an interesting topic for future research to address these questions in greater detail, and in particular to study the evolution of lattice topologies and neighbourhood sizes.

Box 2: Kin-selection and population structure

The theory of kin selection (Hamilton 1964a,b) is often invoked to explain the origin of cooperation and the resolution of conflicts. The basic idea is that if a ‘helper gene’ causes its carrier to provide a benefit b to others at a cost c to itself, then the frequency of the helper gene only increases if the benefits fall sufficiently often to other carriers of the gene, e.g. because of relatedness between actor and recipient. Specifically, if r is the degree to which benefits accrue to other altruists compared with average population members, then Hamilton's rule specifies that the helper gene will increase from low frequencies if its inclusive fitness r b − c is greater than zero.

Kin selection is rarely considered in models of reciprocal altruism (for exceptions see e.g. Marshall & Rowe 2003), but it is possible to establish a connection between kin selection and the dynamics of cooperation in the spatial PD. It is generally thought that kin selection should operate in ‘viscous’ populations (Hamilton 1964a), in which limited dispersal promotes interactions among relatives. In the lattice models discussed here, population viscosity is obtained by assuming that individuals only interact with and disperse to neighbouring sites. The following simple argument illustrates that kin selection can benefit cooperation under these conditions. Imagine a homogeneous lattice population consisting of defectors into which cooperators try to invade. An analytical argument based on the technique of pair approximation (van Baalen & Rand 1998; Le Gaillard et al. 2003, see also main text) shows that as long as cooperators are rare, every cooperator has on average approximately one other cooperator in its neighbourhood. Therefore, from playing a PD against each of its n neighbours the cooperator gets a total benefit of b and pays a total cost of nc. On the other hand, defectors get nothing, having on average only defectors as neighbours because cooperators are rare. As a result, cooperators can invade if b − nc > 0, or equivalently, if rb − c > 0, where r = 1/n is the average degree of relatedness of a cooperator to its neighbours. This could be considered as Hamilton's rule for the spatial PD, and inspection of Fig. 1a shows that the rule is quite accurate: for n = 4 cooperators should be able to invade if b > 4c, i.e. if ρ = c/(b − c) < 1/3, which is roughly confirmed by the numerical simulations.
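Spelling out the last step of this argument explicitly:

```latex
% Invasion condition for a rare cooperator with n neighbours,
% of which (by pair approximation) on average one is a cooperator:
b - n c > 0
\;\Longleftrightarrow\;
r b - c > 0 \quad (r = 1/n);
\qquad
\text{for } n = 4:\;\;
b > 4c
\;\Longleftrightarrow\;
\rho = \frac{c}{b-c} < \frac{c}{4c - c} = \frac{1}{3}.
```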

It is worth pointing out that although spatial structure clearly favours cooperation in the PD (without spatial structure, cooperators would never thrive), the region of parameter space in which cooperators can persist is rather small. In terms of the spatial Hamilton's rule above, this is because the average relatedness of an invading cooperator to its neighbours is rather small. Thus, even though population viscosity is supposedly very high in lattice models with nearest-neighbour interactions, cooperators tend to have few cooperating neighbours during an invasion attempt. This can in turn be attributed to the fact that cooperators not only help each other, but also compete for lattice sites, thus limiting each other's proliferation.

In fact, Wilson et al. (1992) and Taylor (1992), and more recently West et al. (2002), have pointed out that population viscosity not only increases relatedness among cooperatively interacting individuals, but also increases competition for resources among relatives. West et al. (2002) show how these opposing effects can be incorporated into a modified version of Hamilton's rule that takes into account the relatedness of a cooperator to individuals who suffer increased competition from recipients of the cooperative act. The earlier results of Wilson et al. (1992) and Taylor (1992) indicated that the conditions for cooperation to thrive are exactly the same in well-mixed and in spatially structured populations, and hence that spatial structure may actually have no effect on the evolution of cooperation. However, these results may be too pessimistic, as spatial structure can favour cooperation not only in the spatial PD, but also in the corresponding lattice models for the Public Goods game (Mitteldorf & Wilson 2000; Hauert et al. 2002b; Szabó & Hauert 2002; see Box 3 for an explanation of the Public Goods game). In fact, the effect that competition between relatives counteracts kin selection is likely to be most pronounced in such lattice models, in which game interactions and competition occur among nearest neighbours. Moreover, Le Gaillard et al. (2003) have argued that through the minor change of allowing for empty lattice sites, the effect of competition between relatives becomes much weaker. In situations in which reproduction is local but competition is global, e.g. because of high dispersal (a scenario that Wilson et al. (1992) called ‘alternating viscosity’), competition between relatives will not be effective in impeding the evolution of cooperation through kin selection. West et al. (2001) described an empirical example in fig wasps in which intense local competition can indeed prevent cooperation despite potentially strong kin selection, an idea that has also been supported by recent microbial experiments (Griffin et al. 2004). In general, the extent to which local competition can counteract the beneficial effects of population viscosity in natural systems will critically depend on the particular form of population structure, and on the stages in the life cycle that are affected by cooperative acts (Wilson et al. 1992; van Baalen & Rand 1998; Le Gaillard et al. 2003). These questions deserve further theoretical as well as empirical investigations.

Continuous PD games

In the classical PD, cooperation is all or nothing, since this game has only two strategies. However, it is natural to assume that in real systems, cooperation can vary continuously. This idea has been present in other models of cooperation (e.g. Frank 1998), but continuous cooperative investments have only rather recently been incorporated into the PD (Mar & St Denis 1994; Killingback et al. 1999). In fact, it is straightforward to define a continuous version of the PD by assuming that cooperative strategies are defined by a real number x that lies in some interval [0, xmax], where xmax is the maximal possible investment. One then assumes that the benefit that an individual with trait value x provides to the opposing player is given by a benefit function B(x), whereas the cost that strategy x imposes on its carrier is given by a cost function C(x). Thus, if two individuals with trait values x and y play the continuous PD, player x gets the benefit from the cooperative investment y and incurs the costs from its own investment x, hence the payoff to player x is B(y) − C(x). Similarly, the payoff to y is B(x) − C(y). Typically, one assumes that the functions B(x) and C(x) are monotonically increasing and satisfy B(0) = C(0) = 0, as well as B(x) > C(x) at least for small x (otherwise mutual cooperation would be bad). For example, these functions could be linear: B(x) = bx and C(x) = cx, with b > c > 0. In such continuous games one would like to know the evolutionary dynamics of the cooperative trait x. In the section on the continuous Snowdrift game, we briefly describe how the theory of adaptive dynamics can be used as a general approach to investigate continuous games. For the continuous PD it is easy to see, and intuitively clear, that the trait x always evolves to 0, essentially because the cooperative trait only affects the costs, but not the benefits, of its carrier. Thus, defection prevails in the continuous PD, and attention once again turns to investigating mechanisms that can cause the trait x to evolve to non-zero levels.
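The claim that x always evolves to 0 can be made precise with a one-line calculation: with the payoff defined above, a rare mutant with investment x′ in a resident population playing x receives B(x) − C(x′), so the selection gradient acting on investments is

```latex
% Selection gradient for the continuous PD (mutant payoff B(x) - C(x')):
\left.\frac{\partial}{\partial x'}\bigl[B(x) - C(x')\bigr]\right|_{x'=x} = -\,C'(x) < 0 ,
```

which is negative for any monotonically increasing cost function, so gradual evolution can only reduce cooperative investments.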

The extensions considered to date are iteration and spatial structure. In the continuous IPD, players make continuous cooperative investments over a number of rounds. For example, the investment in round n + 1 can be based on the opponent's investment in round n: x_{n+1} = f(y_n). Wahl & Nowak (1999a,b) have investigated the case where the function f is linear: x_{n+1} = k y_n + d. Cooperative strategies are characterized by high values of k and d, because when such strategies play against themselves, iteration quickly leads to large cooperative investments, and hence to large payoffs [in each round, payoffs are calculated as for the continuous PD, i.e. based on the benefit and cost functions B(x) and C(x)]. The analysis of Wahl & Nowak (1999a,b) is rather complex, but the general picture that emerges is nicely summarized in Figure 7 of their second paper (Wahl & Nowak 1999b): more cooperative strategies can gradually evolve, but once cooperation has reached a certain level, it becomes vulnerable to invasion by defecting strategies. This results in everlasting cycles between cooperation and defection. In particular, cooperation cannot be stably maintained in this type of model.

In a similar vein, Roberts & Sherratt (1998) have devised a class of ‘raise-the-stakes’ strategies for iterated interactions that consist of increasing cooperative investments in response to an opponent's cooperation in the previous round. They have argued that these strategies do well against a number of traditional strategies in the IPD, such as TFT. However, in a continuous strategy space, evolutionary dynamics would gradually decrease cooperative investments in raise-the-stakes strategies (Killingback & Doebeli 1999). Thus, cooperation seems to be generally difficult to maintain in the continuous IPD if future investments are solely based on the current investment of the opposing player.

Things turn out to be different if investments in round n + 1 are based not just on the investment of the opponent, but on the net payoff received in the previous round. Here, x_{n+1} = f(p_n), where p_n = B(y_n) − C(x_n) is the payoff that an individual playing x_n received when playing against y_n. Killingback & Doebeli (2002) have analysed the case where the function f is linear, so that x_{n+1} = k p_n + d. Cooperative strategies are again characterized by high values of k and d, but it should be noted that determining the dynamics of the investment levels during a single iteration is already a non-trivial problem. Nevertheless, Killingback & Doebeli (2002) have shown that cooperative strategies evolve if the benefits B(x) increase fast enough for small investments [i.e. whenever the slope B′(0) is sufficiently large]. Thus, when continuous investments are based on previous payoffs, cooperation can evolve and persist in the continuous IPD. This echoes the findings from the classical IPD, where Pavlov-like strategies are generally more successful than TFT-like strategies that base their behaviour on the opponent's previous move, rather than on previous payoffs.
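As a toy illustration of such payoff-based rules (a minimal sketch, not the full model of Killingback & Doebeli (2002); it assumes linear functions B(x) = bx and C(x) = cx and arbitrary parameter values), the following code iterates x_{n+1} = k p_n + d for two interacting players and shows how investments settle at a substantial level when both players use a rule with large k and d, but stay low against a player that never invests:

```python
def iterate_investments(k1, d1, k2, d2, b=3.0, c=1.0, x0=0.1, y0=0.1,
                        x_max=10.0, rounds=50):
    """Iterate payoff-based investment rules x_{n+1} = k*p_n + d for two players.

    Linear benefit B(x) = b*x and cost C(x) = c*x are assumed for simplicity,
    and investments are clipped to the interval [0, x_max].
    """
    x, y = x0, y0
    for _ in range(rounds):
        p1 = b * y - c * x          # payoff to player 1 in this round: B(y) - C(x)
        p2 = b * x - c * y          # payoff to player 2: B(x) - C(y)
        x = min(max(k1 * p1 + d1, 0.0), x_max)
        y = min(max(k2 * p2 + d2, 0.0), x_max)
    return x, y

# Two players with a 'cooperative' rule (large k and d): investments converge
# to a substantial positive level (here about 2.5 each).
print(iterate_investments(0.4, 0.5, 0.4, 0.5))
# Against a player that never invests (k = d = 0), investments remain low.
print(iterate_investments(0.4, 0.5, 0.0, 0.0))
```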

Interestingly, cooperation does not evolve if the continuous IPD is used as a model for mutualism between two different species. In this case, payoffs are obtained from continuous IPD interactions between members of different species, but competition for reproduction based on these payoffs occurs within species (Doebeli & Knowlton 1998). Scheuring (2005) has recently shown analytically that in this setting, cooperative strategies do not evolve in unstructured populations. However, Doebeli & Knowlton (1998) have shown that the evolution and maintenance of cooperation, and hence mutualism, are possible if the two interacting populations are spatially structured. Moreover, spatial structure can promote cooperation even in the continuous PD without iteration (Killingback et al. 1999), and can lead to coexistence of two distinct phenotypic clusters of high and low investors (Koella 2000). For lattice games with variable population sizes, the results of van Baalen & Rand (1998) and particularly those of Le Gaillard et al. (2003) imply that evolution of cooperation in the continuous PD should be the default expectation in spatially structured populations. Overall, it appears that the same mechanisms that support cooperation in the classical PD can promote cooperation in the continuous PD.

Other extensions of the PD

Iteration, spatial structure, and continuous investments, as described in the preceding paragraphs, are but three general types of extensions of the basic PD. Another important generalization consists of extending the PD to interactions among more than two players. The resulting N-player games are called Public Goods games and have a long tradition in the economics literature (Kagel & Roth 1995). Box 3 explains some basic aspects of Public Goods games and highlights interesting consequences of optional participation in such games. A different line of extensions of the PD is based on the idea that individuals may carry a reputation, and that players can condition their behaviour on the opponents’ reputation. This leads to the notion of indirect reciprocity (Alexander 1987; Nowak & Sigmund 1998; Panchanathan & Boyd 2004), which is the basis for the mechanisms of reward and punishment favouring cooperation in PD interactions. These concepts are explained in Box 4.

A related idea consists of considering tag-based games, in which cooperative interactions occur between individuals that are similar with respect to some neutral characteristic such as colour (Riolo et al. 2001; Hochberg et al. 2003; Axelrod et al. 2004). Tag-based cooperation appears to be prone to exploitation by unconditional cheaters (Roberts & Sherratt 2002), but further investigations of this interesting idea are called for. Overall, judging from the number of recent publications in high profile journals dedicated to the study of cooperation based on PD interactions, it is clear that this is a thriving line of research that attracts a lot of interest from a diverse array of scientists.

Box 3: Public Goods games and volunteering

The generalization of PD type interactions to groups of arbitrary size N is known as Public Goods games (Kagel & Roth 1995). In a typical Public Goods experiment a group of, e.g. six players gets an endowment of $10 each. Every player then has the option to invest part or all of their money into a common pool, knowing that the experimenter is going to triple the amount in the pool and divide it equally among all players regardless of their contribution. If everybody invests their money, each player ends up with $30. However, each invested dollar only yields a return of 50 cents to the investor. Therefore, if everybody plays rationally, no one will invest, and hence the group of players will forego the benefits of the public good. In formal terms and assuming that players either defect or fully cooperate, the payoff for defectors becomes P_D = α n_c γ/N, while the payoff for cooperators is P_C = P_D − γ, where α is the multiplication factor of the common pool, n_c the number of cooperators in the group, and γ is the cost of the cooperative investment. As in the PD, defection dominates and cooperators are doomed. In fact, a Public Goods game in a group of size N is equivalent to (N − 1) pairwise PDs under the transformation b = αγ/N, c = γ(N − α)/[N(N − 1)] (Hauert & Szabó 2003). Under this equivalence, larger Public Goods groups correspond to larger numbers of single PD interactions. This implies that defectors can exploit cooperators more efficiently in larger groups, and hence that cooperation becomes increasingly difficult to achieve, which remains true even if interactions are iterated (Boyd & Richerson 1988; Hauert & Schuster 1998; Matsushima & Ikegami 1998). Interestingly, in experimental Public Goods games human subjects do not follow rational reasoning and often exhibit cooperative behaviour, thereby not only faring much better, but also undermining basic rationality assumptions in economics (Fehr & Gächter 2002). From a theoretical viewpoint, the reasons for this outcome are not fully understood but likely involve issues related to reward, punishment and reputation (Milinski et al. 2002), some of whose basic features are explained in Box 4.
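The dilemma can be made explicit with a few lines of Python (a sketch using the six-player, $10, tripling example above): switching from defection to cooperation always changes a player's payoff by αγ/N − γ, which is negative whenever α < N, even though mutual cooperation beats mutual defection.

```python
N = 6          # group size
alpha = 3.0    # multiplication factor of the common pool
gamma = 10.0   # cost of the cooperative contribution (the $10 endowment invested)

def payoff_defector(n_c):
    """Share of the multiplied pool received by a defector when n_c group members cooperate."""
    return alpha * n_c * gamma / N

def payoff_cooperator(n_c):
    """A cooperator gets the same share but also pays its contribution."""
    return payoff_defector(n_c) - gamma

# Switching from D to C adds alpha*gamma/N to one's own share but costs gamma:
# the net effect is gamma*(alpha/N - 1) = -5 dollars here, so defection dominates.
for others_cooperating in range(N):
    gain = payoff_cooperator(others_cooperating + 1) - payoff_defector(others_cooperating)
    print(others_cooperating, "co-players cooperate -> net gain from cooperating:", gain)

# Yet mutual cooperation (payoff 20, i.e. ending up with $30 from a $10 endowment)
# beats mutual defection (payoff 0):
print("all cooperate:", payoff_cooperator(N), "  all defect:", payoff_defector(0))
```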

Another approach to overcome the Public Goods dilemma is to allow for voluntary participation, which can be modelled by considering a third strategic type, called the loners (Hauert et al. 2002b). Loners are risk averse and instead of engaging in the Public Goods game rely on a small but fixed income σ [(α − 1)γ > σ > 0, where (α − 1)γ is the payoff for mutual cooperation and 0 the payoff for mutual defection]. This results in a rock-paper-scissors type dominance hierarchy of the three strategies: if everybody cooperates it pays to switch to defection, if defection dominates it is better to abstain and choose the loners option, and if loners abound, cooperation becomes attractive again, because it is likely that the effective group size in the Public Goods interaction is small and produces high returns. As a result, cooperators and defectors co-exist with oscillating frequencies. Thus, voluntary participation provides an escape hatch out of states of mutual defection and economic stalemate. Interestingly, the average payoff of all three strategic types, and hence the average population payoff, converges to the loner's payoff σ (Hauert et al. 2002a), which is better than a population payoff of 0 that would evolve in the absence of loners. The above dynamics of voluntary Public Goods interactions has recently been observed in experiments with humans (Semmann et al. 2003). We also note that in spatial voluntary Public Goods games, in which individuals interact only with a limited local neighbourhood (see section Spatial PD games), the average population payoff is usually greater than σ, i.e. the population draws a net profit from voluntary Public Goods interactions.

Figure A.  Replicator dynamics of the voluntary Public Goods game. The three homogeneous states of the population, e_loners, e_cooperators and e_defectors, are unstable, reflecting the rock-scissors-paper type dominance hierarchy between cooperators, defectors and loners. There is an interior neutrally stable fixed point Q that is surrounded by neutrally stable closed orbits. True stability of Q or interior limit cycles can be obtained through various extensions of the model, e.g. by introducing spatial structure. Parameters: n = 5; α = 3; c = 1; σ = 1.

Box 4: Cooperation through reputation

Direct reciprocity can establish cooperation in repeated interactions following the simple rule ‘I help you and you help me’. However, in higher organisms, and humans in particular, cooperation may also be established through indirect reciprocity: ‘I help you and someone else helps me’ (Alexander 1987). The basic idea is that an individual can improve its reputation, or image score, by helping fellows in need. It thereby produces a costly signal, which in turn will be assessed by other members of the population and may trigger assistance in case the individual itself is in need. Indirect reciprocity requires some consensus about how behaviour affects reputation. How such a consensus is reached is an interesting question in itself that deals with the establishment of social norms (Henrich et al. 2001). If higher image scores increase the chance of receiving help in the future, then discriminating strategies that condition their help on an acceptable image score of the recipient can promote cooperation (Nowak & Sigmund 1998). However, such scoring strategies have one weakness: whenever they refuse to help a cheater with a low score, their own score drops and reduces the chances of future help. To avoid this, the concept of standing was introduced, whereby the individual remains in good standing if it refuses to help an ‘unworthy’ recipient (Leimar & Hammerstein 2001). This concept could be taken even further by demanding that an individual attains bad standing for helping an unworthy cheater. Investigating these questions is an active field (Brandt & Sigmund 2004; Ohtsuki & Iwasa 2004) and includes interesting experimental studies indicating that humans tend to favour the simpler scoring strategies (Wedekind & Milinski 2000; Milinski et al. 2002).

The concept of reputation also lends itself to studying the role of punishment and reward for cooperation. Punishment is common in nature (Clutton-Brock & Parker 1995), ranging from simple forms of spiteful toxin production in bacteria (Kerr et al. 2002) to institutionalized civil and criminal law in humans. The success of cooperators hinges on the ability to condition cooperation on information about the opponent's reputation, i.e. about whether the opponent punishes defection, and to adjust the behaviour accordingly (Sigmund et al. 2001; Brandt et al. 2003; Hauert et al. 2004). Such interactions with second thoughts occur in two stages: first individuals decide whether to cooperate or to defect; second, individuals may punish the opponent conditioned on the outcome of the first stage. This results in four basic behavioural types: the social strategy G1 that cooperates and punishes defection, the paradoxical strategy G2 that defects but punishes, the asocial G3 strategy that neither cooperates nor punishes, and finally the mild G4 strategy that cooperates but does not punish. G2 is paradoxical because it does poorly when facing other G2 players. In evolving populations, the asocial G3 eventually reaches fixation. The reason for this is that the social G1 cannot discriminate between other G1 and G4 players. Hence G4 players can increase in numbers through random drift and thereby facilitate successful invasions by the asocial G3. This outcome changes dramatically if reputation is introduced, i.e. if individuals may learn about the punishing behaviour of their opponent and adjust their cooperative behaviour accordingly. This is illustrated in Fig. B. In spatially structured populations in which interactions are limited to the nearest neighbours (cf. section Spatial PD games), punishment also promotes cooperation, and quite intriguingly can even enforce cooperation if the costs of cooperation exceed the benefits (Brandt et al. 2003).

In contrast to punishment scenarios, rewarding mechanisms seem to be limited to higher organisms, and perhaps even to humans. Interestingly, already the simplest models indicate that such mechanisms lead to complicated dynamics that make it much more difficult to establish and maintain cooperation (Sigmund et al. 2001). This is essentially because rewarding individuals are easily exploited, while it is impossible to exploit punishers. Consequentially, rewarding mechanisms do not allow for similarly clear-cut conclusions as are possible for the case of punishment.

Figure B.  Dynamics of the PD with punishment and reputation. The four-dimensional strategy space foliates into invariant manifolds because (x_1 x_3)/(x_2 x_4) is an invariant of the dynamics, where x_i denotes the frequency of strategy G_i. The dynamics is illustrated on the manifold given by (x_1 x_3)/(x_2 x_4) = 1. The figure illustrates that reputation leads to bi-stable dynamics. Depending on the initial configuration, the population evolves either towards a pure social G1 state or a purely asocial G3 state. The basin of attraction of each of the two states is determined by the cost/benefit ratio of cooperation, as well as by the cost/fine ratio of punishment. It can be shown that under rather general conditions the social strategy G1 has the larger basin of attraction (Sigmund et al. 2001).

Models of cooperation based on the Snowdrift game

The SD game derives its name from a situation in which two drivers are trapped on either side of a snowdrift and have the options of staying in the car or removing the snowdrift. Letting the opponent do all the work is the best option, but if the other player stays in the car it is better to shovel, lest one never gets home (Sugden 1986). Similar situations occur whenever the act of cooperation provides a common good that can be exploited by others, but that also provides some benefits to the cooperator itself. For example, yeast cells secrete an enzyme that lyses their environment. The resulting food resource represents a common good that can be exploited by cells that do not secrete the enzyme. However, if no one else cooperates, a single cell may be better off producing the enzyme to prevent starvation, despite the prospects of being exploited (Greig & Travisano 2004). The fundamental difference between the PD and the SD is that in the SD, cooperation is the better option when the opponent defects (Box 1). Consequently, cooperation is maintained at a mixed evolutionarily stable equilibrium. The SD has been used much less than the PD to study the problem of cooperation, because the persistence of cooperation is not in question. But since the SD is still a social dilemma (Box 1), it is a legitimate question to ask which mechanisms increase or decrease the evolutionarily stable proportion of cooperators in the SD.

One approach to this question consists of extending the SD by introducing additional strategies based on how individuals might behave in animal conflicts, such as conditional strategies whose behaviour depends on what the opponent does. Traditionally, such competitive interactions have been discussed in the context of the equivalent Hawk-Dove game (Maynard Smith 1982). For example, the strategy Retaliator, which starts by displaying (C) but escalates (D) if the opponent does, is an ESS that coexists with another ESS corresponding to the mixed C–D stable state of the SD (Zeeman 1981; Maynard Smith 1982). Once a population has adopted the Retaliator ESS, everybody ends up cooperating, which is somewhat reminiscent of retaliating strategies like TFT in the IPD. If one then considers the additional strategy Bully, which starts by escalating (D) but retreats (C) against D, then Retaliator remains an ESS, but it now occurs together with a new ESS that consists of a Hawk-Bully polymorphism (Zeeman 1981; Maynard Smith 1982).

In a similar vein, one can consider behavioural asymmetries between opposing players. For example, whether individuals play D or C (i.e. Hawk or Dove) could depend on whether they do or do not hold a territory. Thus, the strategy Bourgeois, which escalates (D) when it is the holder of a territory but retreats (C) when it is not, is evolutionarily superior to both C and D under suitable conditions, e.g. when the chance of being the holder of a territory is 1/2 for all individuals (Maynard Smith 1982). Other asymmetric strategies have been investigated, for example, by Dubois et al. (2003) in the context of foraging theory. In particular, asymmetric strategies have been implicated in the evolutionary origin of food sharing in humans (e.g. Blurton Jones 1984).

Iterated interactions in the SD game have only recently been considered (Posch et al. 1999; Dubois & Giraldeau 2003). For example, Posch et al. (1999) study the role of aspiration levels in generalized deterministic win-stay, lose-shift strategies in 2 × 2 games, which include the SD and the PD, and show that the strategy Pavlov can establish cooperation in both games. The results of Dubois & Giraldeau (2003) suggest that iteration generally promotes less aggressive, and hence more cooperative, behaviour. This indicates that iteration may be generally beneficial for cooperation in the SD game. Similar conclusions were reached by McElreath (2003), who studied the effects of indirect reciprocity in the Hawk-Dove game by considering iterated games with randomly selected opponents in each round. Good standing is obtained by defecting against defectors, and bad standing by cooperating with defectors. McElreath (2003) showed that a strategy termed ‘tough’, which defects if the opponent defects or has bad standing, but cooperates otherwise, often does well. Since this strategy cooperates with itself, these results show that the social dilemma of the SD game can be solved through indirect reciprocity in iterated games. However, other forms of reputation can hinder cooperation in this game (Johnstone 2001), and further investigations of reciprocity and iteration are needed to clarify their role for cooperation in the SD game.

Finally, an extension that seems natural is obtained by considering general N-player SD games. Box 5 outlines a general framework for social dilemmas in groups of arbitrary size that can be used to investigate the N-player SD game. Overall, the literature on the SD game is less extensive than that on the PD game. However, recent developments may help us to appreciate that the SD game can be a very useful metaphor for studying the problem of cooperation, as described in the next two sections.

Box 5: Snowdrift games in groups of arbitrary sizes

In analogy to the generalization of the PD to the Public Goods games (cf. Box 3), the pairwise SD can be generalized to interactions in groups of arbitrary size. In such N-player games, cooperators again contribute to a public good that is shared equally among group members regardless of their contributions, but now the marginal gain of additional provisions provided by each additional cooperator decreases with increasing numbers of cooperators (Hauert et al. 2005). For example, food provided by the first cooperator may be essential for an individual's survival, whereas additional food items are no longer vital and are thus less valuable, until eventually further food provisioning becomes useless because individuals are already saturated. This situation can be modelled by introducing a discount factor w such that the payoffs for defectors and cooperators are given by P_D(k) = (b/N)(1 + w + w^2 + ⋯ + w^{k−1}) = (b/N)(1 − w^k)/(1 − w) and P_C(k) = P_D(k) − c, where k denotes the number of cooperators in the group. Hence, the first cooperator produces a benefit b that is shared among all N members of the group (including itself), the second one increases everyone's benefit by wb/N, and so on, so that the last of the k cooperators in the group provides a benefit of w^{k−1} b/N. If w = 1, then all cooperators provide the same benefit b/N. If w < 1, then the benefits are discounted and each additional cooperator provides a lower benefit than the previous one. Note that one could similarly assume that the benefits of additional provisions are synergistically enhanced (w > 1) such that each additional cooperator provides a higher incremental benefit (Hauert et al. 2005).

The discounting factor w thus provides a general framework to discuss cooperation: for c > b/N one recovers the Public Goods game, in which defection is dominant, and for cN/b < w^{N−1} cooperation dominates. However, for 1 > cN/b > w^{N−1} the replicator dynamics for the frequency x of cooperators has a unique stable interior equilibrium at x* = [1 − (cN/b)^{1/(N−1)}]/(1 − w). This corresponds to a generalization of the pairwise SD. The smooth transitions between the different game theoretical scenarios arising from variations of w, c or b allow discussions of cooperation in different kinds of social dilemmas (e.g. PD interactions, SD interactions, and by-product mutualism) and for groups of arbitrary size within a single framework. This may prove to be useful for empirical studies of cooperation (Connor 1995, 1996; Dugatkin 1996; Milinski 1996, Box 6).
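The interior equilibrium can be verified numerically. The short Python sketch below (with arbitrarily chosen illustrative parameters satisfying 1 > cN/b > w^{N−1}) computes the expected payoff gain from switching to cooperation when co-players cooperate with probability x, and confirms that this gain is positive at x = 0, negative at x = 1, and vanishes at x*.

```python
from math import comb

# Illustrative parameters (chosen arbitrarily) satisfying 1 > cN/b > w**(N-1),
# so that an interior equilibrium exists.
N, b, c, w = 5, 1.0, 0.12, 0.8

def gain_from_cooperating(x):
    """Expected payoff change for a focal defector who switches to cooperation,
    when each of its N-1 co-players cooperates independently with probability x."""
    # If k co-players cooperate, the focal player becomes the (k+1)-th cooperator,
    # gains the additional benefit w**k * b/N and pays the cost c.
    return sum(comb(N - 1, k) * x**k * (1 - x)**(N - 1 - k) * ((b / N) * w**k - c)
               for k in range(N))

x_star = (1 - (c * N / b) ** (1 / (N - 1))) / (1 - w)
print("predicted interior equilibrium x* =", round(x_star, 4))                         # ~0.60
print("gain from cooperating at x = 0   :", round(gain_from_cooperating(0.0), 4))      # > 0
print("gain from cooperating at x = 1   :", round(gain_from_cooperating(1.0), 4))      # < 0
print("gain from cooperating at x = x*  :", round(gain_from_cooperating(x_star), 10))  # ~0
```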

Spatial SD games

Like any evolutionary game, the SD can be played on a spatial lattice by assuming that game interactions as well as competition for reproduction occur locally among neighbouring individuals that occupy sites on a spatial lattice. Killingback & Doebeli (1996, 1998) were among the first to consider such spatial SDs. They showed that spatial structure facilitates the spreading of strategies such as Retaliator, which are reminiscent of the ‘nice’, ‘provocable’ and ‘forgiving’ strategies that play a central role in the evolution of cooperation based on the IPD (Killingback & Doebeli 1996). However, for the classical SD it has been realized only recently that, in stark contrast to the spatial PD, spatial structure is generally detrimental to cooperation, so that the fraction of cooperators in structured populations is generally lower than in well-mixed populations (Hauert & Doebeli 2004). More precisely, as in the PD, spatial structure does favour cooperation in the SD for high benefits and small costs of cooperation (Fig. 2). However, already for moderate cost-to-benefit ratios, spatial structure results in lower frequencies of cooperators than expected from the well-mixed SD. This finding is robust against changes in the neighbourhood size and update rules, and it is supported by pair approximation (Fig. 2).

To understand why spatial structure has such contrasting effects on cooperation in the SD and the PD, one needs to look at the geometry of cluster formation in spatial games. As illustrated in Fig. 1b, cooperators survive in the spatial PD by forming large, compact clusters. In contrast, in the spatial SD cooperators form small, filament-like clusters (Fig. 2). These spatial patterns arise from local processes dictated by the payoff structure of the SD, which makes it advantageous to adopt the strategy opposite to that of one's neighbours. As a consequence, an isolated cooperator acts as a seed for expanding dendritic structures, but cannot give rise to compact clusters (see Fig. 2c in Hauert & Doebeli 2004; Hauert 2005). On average, these emergent spatial patterns generate an advantage for defectors, owing to increased exploitation in the fractal-like zone of contact between the two strategies. This leads to an overall reduction in cooperators compared with well-mixed SD populations, an effect that is most pronounced for high values of the cost–benefit ratio. Thus, the spatial SD game illustrates that spatial structure may not be universally beneficial for cooperation. As illustrated in Box 2, the potential advantages arising for cooperators in spatially structured populations can be related to kin selection and to the beneficial effects of inclusive fitness outweighing the negative impact of competition among relatives. In the spatial SD, the balance between these antagonistic forces may change, leading to a disadvantage for cooperators, but a thorough analysis of this conjecture must be left for future research.

The continuous Snowdrift game

Just as in the continuous PD, it is natural to assume that cooperative investments can vary continuously in the SD, and that the costs and benefits of cooperative acts are again given by two functions B(x) and C(x). However, to define the continuous SD, one assumes that investments yield a benefit not only to the opponent, but also to the investing individual itself (Doebeli et al. 2004). Therefore, the payoff to an x-strategist interacting with a y-strategist is P(x, y) = B(x + y) − C(x), where B(x + y) specifies the benefit that the x-strategist obtains from the total cooperative investment made by both players, and C(x) specifies the cost incurred by the x-strategist because of its own investment.

In contrast to the continuous PD, the outcome of the evolutionary dynamics of investment levels in the continuous SD is not obvious and, in fact, can exhibit surprising features. The evolution of the continuous trait x can be analysed using the mathematical framework of adaptive dynamics (Dieckmann & Law 1996; Metz et al. 1996; Geritz et al. 1998). We refer to Doebeli et al. (2004) for the details of this analysis and only summarize the main findings here. In general, the evolutionary dynamics depend on the form of the cost and benefit functions B(x) and C(x). If these functions are quadratic, so that B(x) = b2x^2 + b1x and C(x) = c2x^2 + c1x, the following types of evolutionary dynamics can occur (Fig. 3): first, the trait x may either monotonically increase or decrease over evolutionary time, in which case either pure defection or full cooperation evolves (which of these outcomes occurs may or may not depend on the initial conditions, Fig. 3c–e). Second, the trait x may evolve to some intermediate value x* that represents an ESS (Fig. 3b). In this case, an intermediate level of cooperation evolves in the population. Third, the trait x may evolve to some intermediate value x* that represents an evolutionary branching point, i.e. a fitness minimum. In this case, the population splits into two distinct and diverging phenotypic clusters (Fig. 3a), one making very low and the other very high cooperative investments. Thus, after convergence to an intermediate level of cooperation, the population diversifies into co-existing defector and cooperator lineages. It is interesting to note that these lineages engage in interactions that take the form of the classical SD. Hence, the adaptive dynamics of continuous strategies yields a natural explanation for the evolutionary emergence of the pure cooperator and defector strategies of the traditional SD (see also Hauert 2005).

Figure 3.

Classification of evolutionary dynamics in the continuous SD for quadratic cost and benefit functions B(x) = b2x^2 + b1x, C(x) = c2x^2 + c1x. Darker shades indicate higher frequencies of a trait value. The singular strategies (dashed vertical lines) are indicated where appropriate. (a) Evolutionary branching (the dashed vertical line indicates the evolutionary branching point). (b) ESS (indicated by the dashed vertical line); we note that, in accordance with social dilemmas, the population payoff is not maximized at the ESS (Doebeli et al. 2004). (c) Evolutionary repellor (indicated by the dashed vertical line): depending on the initial conditions, the population evolves either to full defection or to full cooperation (two distinct simulations shown). (d) and (e) Unidirectional evolutionary dynamics in the absence of singular strategies; in (d), cooperative investments decrease to zero – just as in the continuous PD; in (e), full cooperation evolves. Parameter values: population size n = 10 000, standard deviation of mutations 0.005, mutation rate 0.01 (i.e. on average one mutation in the investment level per 100 offspring), and the following cost and benefit parameters: (a) b2 = −1.4, b1 = 6, c2 = −1.6, c1 = 4.56; (b) b2 = −1.5, b1 = 7, c2 = −1, c1 = 4.6; (c) b2 = −0.5, b1 = 3.4, c2 = −1.5, c1 = 4.0; (d) b2 = −1.5, b1 = 7, c2 = −1, c1 = 8.0; (e) b2 = −1.5, b1 = 7, c2 = −1, c1 = 2.
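
Following the standard adaptive dynamics recipe, the selection gradient of the continuous SD with payoff P(x, y) = B(x + y) − C(x) and quadratic B and C is D(x) = B′(2x) − C′(x) = (4b2 − 2c2)x + (b1 − c1), so any singular strategy lies at x* = (c1 − b1)/(4b2 − 2c2); it is convergence stable if 4b2 − 2c2 < 0 and evolutionarily stable if b2 < c2 (Doebeli et al. 2004 give the full analysis). The short Python sketch below applies this bookkeeping to the parameter sets of Fig. 3; the trait range [0, 1] and the implementation details are our own illustrative choices.

```python
def classify_singular_strategy(b1, b2, c1, c2, x_max=1.0):
    """Adaptive dynamics classification of the continuous SD with
    B(x) = b2*x^2 + b1*x and C(x) = c2*x^2 + c1*x.

    Selection gradient: D(x) = B'(2x) - C'(x) = (4*b2 - 2*c2)*x + (b1 - c1).
    """
    slope = 4 * b2 - 2 * c2
    if slope == 0:
        return "degenerate (constant selection gradient)"
    x_star = (c1 - b1) / slope
    if not (0 < x_star < x_max):
        # the gradient does not change sign on [0, x_max]
        direction = "full cooperation" if b1 - c1 > 0 else "pure defection"
        return f"no interior singular strategy; evolution towards {direction}"
    if slope > 0:
        return f"evolutionary repellor at x* = {x_star:.2f} (outcome depends on initial conditions)"
    # convergence stable: ESS if invasion fitness is concave at x*, i.e. b2 < c2
    if b2 < c2:
        return f"ESS at x* = {x_star:.2f} (intermediate investment)"
    return f"evolutionary branching point at x* = {x_star:.2f} (diversification)"

# Parameter sets (a)-(e) of Fig. 3
params = {"a": (6, -1.4, 4.56, -1.6), "b": (7, -1.5, 4.6, -1),
          "c": (3.4, -0.5, 4.0, -1.5), "d": (7, -1.5, 8.0, -1),
          "e": (7, -1.5, 2.0, -1)}
for label, (b1, b2, c1, c2) in params.items():
    print(label, classify_singular_strategy(b1, b2, c1, c2))
```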

Continuous SD games await further exploration. For example, more complicated benefit and cost functions can generate more complex evolutionary dynamics, and there are a number of ways in which the analysis could be extended to interactions in larger groups, i.e. to N-player continuous SD games (Doebeli et al. 2004). Nevertheless, the results obtained so far illustrate a principle termed the ‘Tragedy of the Commune’ (Doebeli et al. 2004): in a cooperative system in which every individual contributes to a common good and benefits from its own investment, selection does not always generate the evolution of uniform and intermediate investment levels, but may instead lead to an asymmetric stable state in which some individuals make high cooperative investments and others invest little or nothing. In humans, the Tragedy of the Commune could lead to conflicts based on accepted notions of fairness. More generally, it could serve as a paradigm for the evolution of division of labour, thus potentially shedding light on problems such as the origin of multicellularity. The Tragedy of the Commune contrasts with the famed ‘Tragedy of the Commons’ (Hardin 1968), which occurs when cooperative acts only benefit others, and in which selection acts to eliminate altruism altogether, even though populations of altruists outperform populations of non-altruists. The more interesting, and perhaps often more realistic, dynamics represented by the Tragedy of the Commune underline the potential usefulness of extending the study of cooperation from PD interactions to more general types of interactions, notably those in which it only pays to defect if others are cooperating. Various such scenarios have recently been classified and studied by Jaffe (2004), and they might apply to many biological as well as sociological systems, as we will briefly discuss in the next section.

Conclusions

The successful promotion of cooperation hinges on positive assortment among cooperators, i.e. it requires that mutual cooperation occurs more frequently than would be expected under entirely random encounters between the different behavioural types (Queller 1985; Fletcher & Zwick 2004). Essentially all mechanisms capable of promoting cooperation do so by generating positive assortment between cooperative types: in repeated interactions, successful strategies increase the number of rounds of mutual cooperation; in spatially structured populations, clustering of cooperators naturally generates assortment; and with reward, punishment or indirect reciprocity, cooperation is conditioned on the recipient's reputation in order to decrease the propensity to support cheaters.

Generally speaking, positive assortment benefits cooperation in both the PD and the SD, because both games represent social dilemmas. However, whereas positive assortment is vital for cooperators in the PD, in which defection is dominant, persistence of cooperation does not hinge on positive assortment in the SD, in which cooperation can invade when rare. As a consequence of this fundamental difference, the effects of introducing additional structures and mechanisms can differ significantly between the two games. This is nicely illustrated in spatially structured populations: in the spatial PD, cooperators thrive by forming compact clusters, but because in the SD it is always best to cooperate if opponents defect, a different cluster geometry emerges in the spatial SD that ultimately decreases the frequency of cooperators. The fundamental difference between the PD and the SD also manifests itself with continuous investments, which generate the Tragedy of the Commons in the PD, representing the loss of altruism, but can result in the Tragedy of the Commune in the SD, representing diversification in cooperative investments.

The majority of game theoretical investigations of the evolution of cooperation are based on replicator dynamics. While this is certainly an excellent starting point that comes equipped with a powerful and sophisticated mathematical theory, the approach has a significant shortcoming: it does not take ecological dynamics into account. It is becoming increasingly clear that considering the feedback and interplay between ecological and evolutionary dynamics is essential for understanding many evolutionary processes, as exemplified by the many insights generated by the theory of adaptive dynamics (Geritz et al. 1998) in recent years. While some models have studied the effects of varying population sizes on the evolution of cooperation (e.g. Nowak et al. 1994b; van Baalen & Rand 1998; Harms 2001; Aviles 2002; Le Gaillard et al. 2003), this is mostly uncharted territory. To see why incorporating ecological aspects might be important for understanding the dynamics of cooperation, consider a situation in which defection has a detrimental effect on per capita growth rates. If defection dominates, population density decreases, which could in turn lead to smaller groups of interacting individuals. But small interaction groups may be more conducive to cooperation, as exemplified by the Public Goods game (Box 3). Thus, a feedback of strategy frequencies on ecological dynamics could lead to the maintenance of cooperation.
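
As a purely illustrative toy model of such a feedback (our own construction, not taken from any of the cited studies), one can couple replicator dynamics for the frequency of cooperators to a population density that grows only when cooperators are sufficiently common, and let the density set the size of interaction groups:

```python
# Toy eco-evolutionary feedback; all assumptions are illustrative:
# - x: frequency of cooperators, n: scaled population density in (0, 1]
# - density grows only if cooperators are sufficiently common
# - interaction group size N increases with density, and cooperation
#   pays only in sufficiently small groups (linear public-goods payoffs)
b, c = 4.0, 0.9              # hypothetical benefit and cost parameters

def group_size(n):
    return max(2, round(2 + 6 * n))        # N ranges from 2 to 8

def gain_from_cooperating(N):
    return b / N - c                        # own share of the contribution minus its cost

x, n, dt = 0.5, 0.9, 0.01
for _ in range(50000):
    N = group_size(n)
    x += dt * x * (1 - x) * gain_from_cooperating(N)   # replicator dynamics
    n += dt * n * (1 - n) * (x - 0.5)                  # growth requires enough cooperators
    x = min(max(x, 1e-6), 1.0)
    n = min(max(n, 1e-6), 1.0)

print(f"cooperator frequency ~ {x:.2f}, density ~ {n:.2f}, group size N = {group_size(n)}")
```

In this caricature, runaway defection shrinks the population, which reduces group size and restores the advantage of cooperation, so the system hovers around an intermediate cooperator frequency rather than collapsing to pure defection.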

Incorporating ecological dynamics into models of cooperation appears challenging, because the resulting models can become rather complicated (Harms 2001; Aviles 2002; Le Gaillard et al. 2003), but it is likely to be very important for linking theoretical results with empirical studies. In fact, we feel that in order to achieve future progress in understanding the evolution of cooperation, close collaboration between theoreticians and experimentalists is of utmost importance. The research reviewed in this article demonstrates that even two very simple theoretical metaphors, the PD and the SD, can give rise to a wealth of insights into cooperation. To a large extent, it remains to be seen which of these results are most relevant for natural systems. There are numerous promising empirical systems in which various aspects of the problem of cooperation can be studied; some of these are outlined in Box 6. In particular, microbial microcosms appear to be well suited for experimental investigations of cooperative investments and the Tragedy of the Commune (Box 6). Bridging the gap between theoretical and empirical research, and establishing tight and mutually inspiring cooperation between the two approaches, is a major challenge for further progress in understanding the evolution of cooperation.

Box 6: Empirical examples

The models described in this review have potential applications in a wide array of natural systems, ranging from microorganisms to humans. In this box, we briefly list some empirical model systems that have been or could be used for investigations of the problem of cooperation.

Microorganisms:  In general, microbial systems are very promising for experimental evolution studies for a variety of reasons: their short generation times, their comparative ease of handling, and the wealth of pertinent genetic and physiological information available for them. In particular, such systems may prove to be very useful for studying cooperation (Travisano & Velicer 2004).

Viruses:  When viruses co-infect a cell, the replication enzymes they produce represent a common resource (e.g. Huang & Baltimore 1977, pp. 73–116). Turner & Chao (1999) showed that interactions between RNA phages co-infecting bacteria are governed by a PD in which viral cooperation consists of the production of diffusible intracellular products. The same authors subsequently suggested that these interactions evolve into a SD with co-existing cooperators and cheaters (Turner & Chao 2003), thus potentially providing an example of the Tragedy of the Commune (Doebeli et al. 2004). Similarly, defective interfering particles (Huang & Baltimore 1977), which lack the coding regions for the replication enzymes, might evolve according to the Tragedy of the Commune.

Yeast:  The single-celled yeast Saccharomyces cerevisiae secretes an enzyme to hydrolyze sucrose (Greig & Travisano 2004). Since the enzyme is a common resource, the Tragedy of the Commune may apply, suggesting that there might exist yeast populations in which some cells produce the enzyme, while others do not, and simply use the enzyme produced by other cells. Exactly this situation has been engineered experimentally by Greig & Travisano (2004). It would be interesting to study the evolution of this polymorphism under various experimental conditions.

Antibiotic resistance in bacteria:  Resistance to β-lactam antibiotics often arises because bacteria secrete an enzyme, β-lactamase, that inactivates the antibiotics, which would otherwise inhibit bacterial cell wall synthesis (Neu 1992). Since β-lactamase is secreted outside the bacterial cell, it is a common resource. The Tragedy of the Commune would imply that, in certain situations, bacterial populations might evolve into two distinct subpopulations, consisting of high and low β-lactamase producers.

Spore formation and swarming in bacteria:  Wild-type strains of Myxococcus xanthus exhibit cooperative swarming mediated by extracellular pili (Velicer & Yu 2003). In lineages of M. xanthus unable to make pili, alternative mechanisms of cooperative swarming readily evolve (Velicer & Yu 2003). Myxococcus xanthus also aggregates into spore-producing fruiting bodies during starvation, and one can find mutants with anti-social behaviour (Velicer et al. 2000). Thus, M. xanthus appears to be a promising system for further studies of cooperation, including the question of cooperative polymorphisms.

Adhesion in bacteria:  In heterogeneous environments, the bacterium Pseudomonas fluorescens forms cooperating groups by producing an adhesive polymer, which plays the role of a common good. Rainey & Rainey (2003) showed that cooperation (polymer production) is costly but benefits the group. They propose that this system may be used to shed light on the evolution of multi-cellularity. Interestingly, the social amoeba Dictyostelium discoideum implements a tag-based mechanism (cf. the section Other extensions of the PD) for adhesive cooperation that is mediated by a single gene, csA. If wild-type cells are mixed with csA-knockout cells, the wild-type cells direct the benefits preferentially to other wild-type cells (Queller et al. 2003).

Higher organisms:  The study of cooperation has a long tradition in behavioural studies of higher organisms (Maynard Smith 1982; Dugatkin 1997) and experimental economics (Kagel & Roth 1995). We list only a few prominent examples.

Collective hunting and territory defence in mammals:  Detailed observations of lions and baboons, for example, support the notion that selection would favour a state in which certain individuals invest heavily in hunting and territory defence, while others invest little (Packer 1977; Packer & Ruttan 1988; Heinsohn & Packer 1995). Thus, such cooperative interactions may be governed by the SD game.

Predator inspection in fish:  In many species, some individual fish move away from their shoal and approach a potential predator (Milinski 1987; Pitcher 1992). The information that the inspectors obtain can be viewed as a common resource (Magurran & Higham 1988). Even though such systems have been investigated rather extensively (see e.g. Milinski et al. 1997), it is still debated whether predator inspection generally follows a PD, an SD or by-product mutualism.

Egg trading in simultaneous hermaphrodites:  During mating in seabass (Serranidae), individuals divide their eggs into parcels and alternate with their mates in offering these parcels for fertilization. Brembs (1996) discusses this system in the context of the IPD.

Sentinel behaviour in mammals:  Group members of some animal societies take turns acting as sentinels. Clutton-Brock et al. (1999) suggest that in meerkats (Suricata suricatta), guarding may be an individual's optimal strategy if no other animal is on guard and hence sentinel behaviour may be governed by SD interactions.

Blood sharing in vampire bats:  When vampire bats fail to find food, they are often fed by successful roostmates. Wilkinson (1984) suggested that these interactions may represent IPD interactions.

Cooperation in humans:  There is a large literature on experimental studies of cooperation in humans (e.g. Kagel & Roth 1995), and humans are generally a very promising experimental system for game-theoretic models of cooperation. Fehr & Fischbacher (2003) argue that experimental evidence indicates the ubiquity of human altruism, even though altruism and selfishness often occur together, which points to the potential usefulness of the continuous SD as a model for cultural evolution. Also, the concepts of punishment and fairness (Fehr & Gächter 2002; Fehr & Rockenbach 2003), of indirect reciprocity (Wedekind & Milinski 2000; Milinski et al. 2002), and of volunteering (Semmann et al. 2003) have been shown to apply in humans.