Prisoners or volunteers?
The tragedy of the commons (Hardin, 1968) is usually cited (Rankin et al., 2007) as an analogy to explain why common resources are overexploited. Hardin (1968) used it as a metaphor to point out that individual self-interest does not necessarily benefit society and that, indeed, in most situations Adam Smith’s ‘invisible hand’ leads to a bad outcome for society. His rebuttal was not a formal model but a verbal argument based on the following example. Imagine a group of herders grazing cattle on common land; each herder gains a benefit only from his own herd, but when a herder adds more cattle to the land, everyone shares the cost, which comes from reducing the amount of forage per animal. Hardin (see also Rankin et al., 2007) goes on to state that if the herders are driven only by economic self-interest, each will realize that it is to his advantage to always add another animal to the common: he sacrifices the good of the group (by forgoing sustainable use of the resource) for his own selfish gain. Thus, herders will continue to add animals, eventually leading to a ‘tragedy’ in which the pasture is destroyed by overgrazing. This metaphor is useful for introducing the concept of social dilemmas, but it can be misleading if the tragedy of the commons is equated with an N-person PD.
Note that, strictly speaking, the PD is a two-person game, and there is no such thing as an N-person PD without defining its structure, unless it means a game in which N individuals play the PD with pairwise interactions. Usually, however, an N-person PD is assumed to be the following game: individuals can be cooperators or defectors; cooperators pay a cost for contributing to the public good, whereas defectors refrain from doing so; after all individuals are given the chance to contribute to the public good, the accumulated contribution is multiplied by an enhancement factor, and the total amount is equally shared among all individuals (cooperators and defectors). Equating the tragedy of the commons to an N-person PD as defined above is misleading for the following reasons.
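The public goods game just described can be made concrete with a short numerical sketch; the function name and parameter values below are illustrative assumptions, not taken from a specific study:

```python
# Illustrative N-person PD (public goods game): cooperators contribute c,
# the pot is multiplied by the enhancement factor r and shared equally.

def payoff(cooperates: bool, n_cooperators: int, N: int, c: float, r: float) -> float:
    """Payoff of one focal individual; n_cooperators counts all cooperators,
    including the focal individual if it cooperates."""
    share = r * n_cooperators * c / N
    return share - c if cooperates else share

N, c, r = 10, 1.0, 3.0  # with 1 < r < N the game is a social dilemma

# Defection dominates: whatever the other N-1 players do, switching from
# cooperation to defection raises the focal payoff (because r/N < 1)...
for others in range(N):
    assert payoff(False, others, N, c, r) > payoff(True, others + 1, N, c, r)

# ...yet universal cooperation beats universal defection (because r > 1):
assert payoff(True, N, N, c, r) > payoff(False, 0, N, c, r)
```

With these payoffs, defection is the dominant strategy, yet the all-defect equilibrium is Pareto inefficient: exactly the dilemma described above.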
First, an N-person PD is a game, whereas the tragedy of the commons is a description of the equilibrium of a game: it simply means that the game is a social dilemma (the best strategy for the individual does not lead to the optimal outcome for the society). Although it is true that an N-person PD leads to a Pareto-inefficient equilibrium, there are other social dilemmas that are not N-person PDs. The volunteer’s dilemma is one example; the stag-hunt game (Skyrms, 2004) is another.
Second, and most importantly, the fact that ‘it is to their advantage to always add another animal to the common’ (Hardin, 1968; Rankin et al., 2007) is taken for granted, but it is by no means necessary. Imagine 10 herders with 10 cows each. If resources are depleted when 100 cows graze the pasture, each herder will introduce nine cows but will find it profitable to introduce the 10th only if at least one other herder volunteers not to introduce his own 10th. It is not true that it is always an advantage to introduce one more cow, if we assume that 100 cows lead to complete resource depletion and that resource depletion is more costly than volunteering not to add the 10th cow. If, instead, the cost of complete resource depletion is irrelevant (smaller than the cost of not introducing the 10th cow), for example because it only affects future generations, introducing the 10th cow will always be profitable (as assumed by Hardin), but this would not be a social dilemma, because resource depletion would be irrelevant for the current players. Either resource depletion is costly (more costly than volunteering not to introduce the 10th cow), in which case we have a social dilemma like the volunteer’s dilemma; or resource depletion is not costly, in which case we have no social dilemma (as defined by game theory) at all. Note that, although resource depletion that affects future generations is indeed a tragedy for society, it is not a social dilemma in the sense of game theory, because it does not affect the payoffs of the current players.
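The herders’ payoffs described above can be written down explicitly; the numerical values (the profit of one cow and the depletion cost) are illustrative assumptions satisfying a > c:

```python
# Illustrative payoffs for the 10-herder example: each herder has already
# introduced nine cows, and the pasture is depleted only if all 100 graze.
# The profit of the 10th cow (g) and the depletion cost (a) are assumed
# numbers with a > c = g.

g = 1.0   # profit from grazing one's 10th cow
c = g     # cost of volunteering (forgoing that profit)
a = 5.0   # cost to everyone if the pasture is completely depleted

def herder_payoff(adds_10th: bool, others_refraining: int) -> float:
    """Extra payoff from the 10th cow, given how many of the other nine
    herders volunteer not to add theirs."""
    depleted = adds_10th and others_refraining == 0
    return (g if adds_10th else 0.0) - (a if depleted else 0.0)

# Adding the 10th cow pays only if at least one other herder refrains:
assert herder_payoff(True, 1) > herder_payoff(False, 1)
# ...but not when nobody else refrains (depletion costs more than volunteering):
assert herder_payoff(True, 0) < herder_payoff(False, 0)
```

Contrary to Hardin’s assumption, adding one more animal is not always an advantage: it depends on whether someone else volunteers.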
Rather than trying to understand what kind of game Hardin had in mind, however, it is important to establish what kind of social dilemmas are relevant in biology. Although some situations in biology are likely to be N-person PDs, all the examples cited in the introduction are certainly more similar to a volunteer’s dilemma than to an N-person PD. Some social dilemmas that are clearly volunteer’s dilemmas have probably been classified as PDs simply because it was assumed that any social dilemma is a PD. Other cases have been classified as snowdrift games (SG), although in fact they too are volunteer’s dilemmas, because they do not involve pairwise interactions. Interactions between RNA phages co-infecting bacteria, for example, were first described as a PD (Turner & Chao, 1999) and subsequently as a SG (Turner & Chao, 2003). However, when viruses co-infect a cell, the replication enzymes they produce are a public good and interactions are collective, as in the volunteer’s dilemma. Another clear and recent example of a case classified as a SG which is, instead, a volunteer’s dilemma is invertase production in yeast (Gore et al., 2009). Collective hunting and territory defence in mammals, defined as a SG by Doebeli & Hauert (2005), are also volunteer’s dilemmas. Sentinel behaviour is not a SG (Doebeli & Hauert, 2005) but the asymmetric equilibrium of the volunteer’s dilemma (different from the symmetric mixed equilibrium discussed here), which requires coordination.
Biological volunteer’s dilemmas
I have described a generalized model of the volunteer’s dilemma, arguing that it applies to many cases of biological interactions. It is relevant for situations in which one or a few individuals are enough to perform a costly action that produces a common good.
Microbes and social species often produce and consume resources that are costly to produce. Each individual would prefer to avoid the cost of producing them, but would rather pay this small cost than the larger cost incurred if nobody produced the resource: replication enzymes for viruses co-infecting a cell (Turner & Chao, 2003), adhesive polymers in bacteria (Rainey & Rainey, 2003) and invertase in yeast (Gore et al., 2009), as well as alarm calls in vertebrates (Searcy & Nowicki, 2005), are typical examples of diffusible public goods. Collective breeding is another example, and even in the extreme case of D. discoideum, in which the individuals that form the stalk die (c = 1), the volunteer’s dilemma can explain the existence of an intermediate number of volunteers; in this case, a certain degree of relatedness between group members is necessary (which is the typical case in Dictyostelium). In general, however, volunteering does not require any relatedness.
It is important to point out that volunteering requires neither relatedness nor reciprocation. Each individual volunteers with a certain probability for his own benefit. Relatedness, as we have seen, affects the results, but it is by no means essential. Reciprocation, instead, plays no role here, although it would be interesting to model an iterated version of the volunteer’s dilemma and see what happens in the repeated game.
In the volunteer’s dilemma, cheaters are maintained in a mixed equilibrium, but they do not replace volunteers completely, because the complete lack of volunteers is more costly than the cost of volunteering. Therefore, each individual will volunteer with a certain probability; alternatively, a polymorphic population will exist with both cooperators and defectors. The problem of the evolution of cooperation, modelled as a volunteer’s dilemma, is not to explain why cheaters do not invade; this is the usual question raised by public goods games when they are modelled as an N-person PD, but it is not the case here. Cheaters, in the volunteer’s dilemma, do invade and are maintained at the mixed equilibrium. The volunteer’s dilemma, however, like the PD, leads to a disappointing result for the society, because the more individuals are available to volunteer, the less likely it is that someone actually volunteers and the public good is produced.
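This group-size effect can be sketched numerically. Under the payoff scheme used here (a volunteer always pays c; everyone pays a > c when nobody volunteers), indifference between volunteering and defecting gives the symmetric mixed equilibrium c = a(1 − γ)^(N−1). The parameter values below are illustrative:

```python
# Symmetric mixed equilibrium of the volunteer's dilemma: a volunteer
# always pays c; everyone pays a > c if nobody volunteers. Indifference
# between the two strategies gives c = a * (1 - gamma)**(N - 1).
# Parameter values are illustrative.

def gamma(N: int, c: float, a: float) -> float:
    """Equilibrium probability that each individual volunteers."""
    return 1.0 - (c / a) ** (1.0 / (N - 1))

def p_produced(N: int, c: float, a: float) -> float:
    """Probability that at least one of the N individuals volunteers."""
    return 1.0 - (1.0 - gamma(N, c, a)) ** N

c, a = 1.0, 4.0
probs = [p_produced(N, c, a) for N in (2, 5, 10, 50)]
# The larger the group, the less likely it is that anyone volunteers:
assert all(p1 > p2 for p1, p2 in zip(probs, probs[1:]))
```

The probability that the good is produced, 1 − (c/a)^(N/(N−1)), strictly decreases with group size, which is the counterintuitive result referred to below.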
The fact that invasion by cheaters does not represent a problem for the volunteer’s dilemma still leaves us with a problem: how to increase the probability that the public good is produced? This is a practical question for which a technical solution can be envisaged (Hardin, 1968), something that, when it involves human interactions, does not require changing our view of morality and social rules. In evolutionary games, it requires that a strategy that increases the benefit for the group also confers an advantage on the individual, so that it can evolve by natural selection. My suggestion is that this can be achieved by brinkmanship, the deliberate increase of the damage that occurs when the public good is not produced.
Schelling (1960) introduced the idea that players in situations of conflict may create a deliberate risk (brinkmanship) as a strategy to induce the other players to adopt a certain behaviour. Brinkmanship has been discussed mainly in the field of international relations, but it can also be applied to cooperation and conflict among individuals.
Here, I apply it to the volunteer’s dilemma. In this case, brinkmanship is achieved by increasing the damage that would result from not producing the public good. Increasing the cost a paid when the public good is not produced could be an effective evolutionary strategy to increase the level of cooperation because, as we have seen, it increases the frequency of volunteers, and because the cost a is shared by all members of the group. A mutant that induces a higher cost a will affect equally the fitness of all group members when the common good is not produced, and therefore will not create differences in relative fitness among group members. On the other hand, as we have seen, it will increase the probability that the common good is produced. Individuals in groups with a higher a will therefore have higher fitness than individuals in groups with a low a, and if they compete with each other, an increase of a will be favoured. Increasing the cost a paid when nobody volunteers, therefore, would be an effective strategy to increase cooperation among group members not only if enforced by an external authority, but also in evolutionary games, in which mutants for a higher a could invade and go to fixation. Clearly, following an increase of this cost (a), the probability of volunteering would not change immediately if it is genetically determined, but it would be adjusted over evolutionary time; if, instead, it is a rational response to a perceived risk (this might be the case in humans and other rational animals), it might change immediately or after a relatively rapid learning process.
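A minimal sketch of this effect, using the equilibrium expressions of the one-volunteer game (the indifference condition c = a(1 − γ)^(N−1)); the parameter values are illustrative:

```python
# Sketch of the brinkmanship effect: under the indifference condition
# c = a * (1 - gamma)**(N - 1), raising the cost a paid when nobody
# volunteers raises both the equilibrium frequency of volunteering and
# the probability that the good is produced. Values are illustrative.

def gamma(N: int, c: float, a: float) -> float:
    return 1.0 - (c / a) ** (1.0 / (N - 1))

def p_produced(N: int, c: float, a: float) -> float:
    # at equilibrium, (1 - gamma)**N = (c/a)**(N/(N-1))
    return 1.0 - (c / a) ** (N / (N - 1))

N, c = 10, 1.0
for a_low, a_high in [(2.0, 4.0), (4.0, 8.0)]:
    assert gamma(N, c, a_high) > gamma(N, c, a_low)
    assert p_produced(N, c, a_high) > p_produced(N, c, a_low)
```

Both monotonicities follow directly from c/a shrinking as a grows, in line with the argument above.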
Cases in which cooperation is favoured as a result of increasing the cost of cheating (or reducing the benefit of cheating) have been discussed especially in the social insects (Wenseleers & Ratnieks, 2006; Ratnieks & Wenseleers, 2008), but in these cases the cost is an individual cost for cheating, rather than a cost due to the failure to produce a public good. Brinkmanship as a strategy for public goods games is probably better known in human social dilemmas. During the cold war, for example, being on the brink of disaster was the strategy to actually avoid a nuclear escalation (Schelling, 1960).
The concept of brinkmanship can have practical applications. Imagine, for example, a number of individuals discarding their waste in the environment, after which one or some of them can volunteer to pay a small cost for cleaning up; if nobody does it, the resulting damage is greater than the cost of cleaning up. This is a volunteer’s dilemma; a way to increase the probability that somebody does the cleaning up would be to increase the deleterious effects of the waste. This strategy might not sound appealing to a public authority, but it is perfectly rational and, in effect, does not need to be enforced by an external authority; it could be achieved by a single individual who made his waste more toxic (increased a), provided this was common knowledge among the players, and provided the right costs and benefits (a > c) exist. It must also be the case that these individuals, after having the opportunity to cooperate with others at a local scale, compete with others on a more global scale.
This idea, that a higher risk induces higher cooperation, is the concept of brinkmanship applied to public goods games. The rationale is that increasing the cost paid when the public good is not produced makes volunteering more likely. It is the same rationale behind the (counterintuitive) result that the probability that someone volunteers decreases with group size.
Reducing the cost c paid by volunteers, instead, does not seem an effective strategy. Obviously, it leads to an increase in volunteering. However, a mutant individual with a lower c would create an asymmetry in the group, and he would be the first to volunteer, because volunteering would be less costly for him. The presence of asymmetries in the costs or benefits is usually recognized as the solution to the volunteer’s dilemma in the social sciences (Nalebuff & Bliss, 1984; Weesie, 1993). Mutants with a low c, however, would not invade, because they would always be the ones that pay the cost of volunteering, and would therefore have a lower fitness (unless having a lower c implies that these mutants do better in other situations, but then this case involves fitness effects beyond cooperation, which is not interesting for our discussion). Reducing c, therefore, although it could be enforced by an external authority, cannot be an evolutionary strategy to increase the probability that the public good is produced.
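A minimal check of this argument, with illustrative payoff values (the cost c_m of the hypothetical low-cost mutant is an assumption for illustration):

```python
# Illustrative check of why a low-cost mutant does not spread: in a group
# where one individual has cost c_m < c (a hypothetical mutant), the
# profile 'mutant always volunteers, everyone else defects' is a Nash
# equilibrium, and at that profile the mutant has the lowest payoff.
a, c, c_m = 4.0, 1.0, 0.5

mutant_payoff = -c_m      # the mutant volunteers and pays its low cost
resident_payoff = 0.0     # residents free-ride on the mutant

# No profitable deviations: if the mutant defected, nobody would volunteer
# and everyone would pay a; a resident volunteering would pay c for nothing.
assert mutant_payoff > -a
assert resident_payoff > -c

# The mutant is always the one paying the cost, hence has lower fitness:
assert mutant_payoff < resident_payoff
```

The asymmetry solves the coordination problem for the group, but at the mutant’s expense, which is why selection does not favour it.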
Another possible strategy for increasing the probability that the public good is produced is setting the number of individuals required to produce the public good (k) to the optimal value. As we have seen, this value is usually the lowest or the highest, and never intermediate. This strategy, however, must be enforced by an external authority. One possible biological example is the following. Imagine a mycorrhizal association in which individual fungi must cooperate among themselves to maintain a symbiosis with a tree (the fungi receive carbohydrates and, in exchange, the tree absorbs more water and nutrients from the soil; Kiers & Denison, 2008). If the tree reacts to the average amount of nutrients, cutting its supply of carbohydrates to the fungi unless a certain amount of nutrients is absorbed, the fungi play a volunteer’s dilemma (among themselves; their cooperative behaviour among themselves is not the same as the mutualism established with the tree). In this case, the tree can actually act as an external authority. By deliberately increasing its dependence on the fungi, that is, the amount of nutrients required from the symbiosis, the tree can in effect increase k and, by doing so, increase the probability that the fungi cooperate and the common good is produced. Examples like this, however, are probably rare in nature, and optimizing k is probably a more relevant strategy for human interactions.
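The threshold game behind this strategy can be explored numerically. At a symmetric mixed equilibrium, an individual must be indifferent between volunteering and defecting, which happens when c equals a times the probability of being pivotal, i.e. that exactly k − 1 of the other N − 1 individuals volunteer. A minimal sketch with illustrative parameter values, using a plain grid search rather than any method from the cited literature:

```python
# Numerical sketch of the generalized volunteer's dilemma in which at
# least k of N volunteers are needed (volunteers pay c; everyone pays a
# if fewer than k volunteer). At a symmetric mixed equilibrium gamma, an
# individual is indifferent when c = a * P(exactly k-1 of the other N-1
# volunteer). Solved here by a simple grid search; values are illustrative.
from math import comb

def pivot(g: float, N: int, k: int) -> float:
    """Probability that exactly k-1 of the other N-1 individuals volunteer."""
    return comb(N - 1, k - 1) * g ** (k - 1) * (1 - g) ** (N - k)

def equilibrium(N: int, k: int, c: float, a: float, grid: int = 10000):
    """Largest root of c = a * pivot(gamma) on (0, 1), or None if no root."""
    best, prev = None, c - a * pivot(0.0, N, k)
    for i in range(1, grid + 1):
        g = i / grid
        cur = c - a * pivot(g, N, k)
        if prev * cur <= 0:
            best = g
        prev = cur
    return best

N, c, a = 10, 1.0, 4.0
# For k = 1 the grid search recovers the closed form 1 - (c/a)**(1/(N-1)):
assert abs(equilibrium(N, 1, c, a) - (1 - (c / a) ** (1 / (N - 1)))) < 1e-3
```

Scanning k with a routine like this is one way to examine, for given costs, which threshold maximizes the probability that the good is produced.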
Increasing the damage suffered when the public good is not produced (brinkmanship), instead, seems a practical, efficient strategy to increase cooperation. It increases both individual fitness and the benefit for the group, and does not require any enforcement by an external authority. This is a rather surprising solution for a social dilemma. Hardin (1968) suggested a technical solution for the problem of cooperation, one that should be enforced by an authority and which does not require changing our view of morality and altruism. The volunteer’s dilemma suggests that, perhaps against our ideal of morality, one can remain selfish and actually increase the benefit for the society.