Keywords:

  • bargaining theory;
  • cooperative game theory;
  • equilibrium selection problem;
  • nurturing behaviour

Abstract

This paper surveys the economic theory of bargaining with a view to applications in biology, using Roughgarden’s recent Genial Gene as a case study in mistakes to be avoided.


Building on the past

The joint behaviour of mated pairs of animals who cooperate in raising their young can be seen as the successful resolution of what economists would call a bargaining game, in which the two parents negotiate over who contributes how much to their joint venture. Such a bargaining game necessarily sits on top of the kind of evolutionary game that is a staple of the biological literature.

A number of biologists have explored the possibility of using this approach to model the nurturing behaviour of animals from species in which parents jointly care for their offspring: for example, McNamara et al. (1999), Johnstone & Hinde (2006) and Periera et al. (2003). As one of the architects of the economic theory of bargaining (Binmore et al., 1982; Binmore & Dasgupta, 1987), I am delighted at this initiative, which I believe to be an important development in evolutionary biology. However, it will be a pity if a full development of the approach has to wait until evolutionary theorists reinvent ideas that are standard in economic theory, as has been the case in previous biological applications of game theory. For example, Trivers (1971) was unaware of the folk theorem of repeated game theory – which had already been around for 20 years or more at the time he wrote – when he reinvented its content as the concept of reciprocal altruism. Maynard Smith’s (1982) path-breaking Evolution and the Theory of Games never mentions the idea of a Nash equilibrium, even though Nash equilibrium is the fundamental concept in game theory. Both ideas are similarly absent from Axelrod’s (1984) Evolution of Cooperation.

Game theorists must shoulder some of the blame for failing to communicate their ideas adequately to the wider scientific community, but there seems no good reason for biologists to continue to reinvent game-theoretical wheels, especially when the reinvented wheels are sometimes square.

The suggestion is not that bargaining models developed for use in economics can be taken down from the shelf and used straight from the package. Each new species will require a new modification of the basic theory. However, such an approach will fail if the basic theory has not been fully understood, as in Roughgarden’s (2009) study of bargaining between mated pairs in her recent Genial Gene.

This paper seeks to provide an introduction for biologists to the economic theory of bargaining. A fuller introduction appears as chapters 16 and 17 in my book Playing for Real (Binmore, 2007). Roughgarden’s (2009) book will be used as a source of errors of interpretation that need to be avoided. In particular:

  1. Implicit agreements among animals need to be self-policing, if they are to be honoured in practice. When the theory of repeated games can be applied, the folk theorem implies that this requirement does not limit the attainable levels of fitness to any great extent.
  2. Orthodox bargaining theory is entirely compatible with the kind of methodological individualism that orthodox evolutionary biology takes for granted.
  3. There is no schism between cooperative game theory and noncooperative game theory to match the divide that Roughgarden sees between her own followers and the supposedly homophobic neo-Spencerians championed by Richard Dawkins.
  4. Rubinstein’s bargaining model is perhaps the most promising starting point for a theory of bargaining in biology. But even in economic applications, it is not always straightforward to relate the parameters of the model to an environment within which it is to be applied.

Noncooperative game theory

Noncooperative game theory is often explained to students of evolutionary biology using simple illustrative games like the Prisoners’ Dilemma, the Battle of the Sexes and the Hawk-Dove Game (which game theorists traditionally call Chicken). Such games with only two pure strategies are sometimes adequate when studying what Roughgarden calls the evolutionary tier. However, it will usually be necessary to think much harder about the structure of the games that need to be constructed when considering a behavioural tier. It hardly needs to be said that game theory cannot provide much illumination if applied to the wrong game.

Extensive forms

Roughgarden (2009, p. 145) is perhaps somewhat behind the times in thinking that evolutionary game theory necessarily begins with a payoff matrix and so does not cater for dynamic interaction between the players during the play of a game. But if game theory did not cater for dynamic interaction between the players, how could it have anything meaningful to say about bargaining?

In fact, Von Neumann & Morgenstern’s (1944) foundational book does not begin with payoff matrices. It begins with the extensive form of a game, which takes account of all possible dynamic interactions by setting out who can do what, when they can do it and what they then know. Chess and Poker are the standard examples of games in extensive form. For games like Poker, a fictional player called Chance makes some of the moves with given probabilities.

Two extensive forms are shown in Fig. 1, using the payoffs from an illustrative model proposed by Roughgarden (2009). The extensive form on the left is called a simultaneous-move game, because it does not matter who moves first provided that the player who moves second is unaware of the move made by the first player. The extensive form on the right is identical except that the information set which denies the second player knowledge of the first player’s move has been removed. The female therefore has two different moves depending on which choice is made by the male.

Figure 1.  Extensive forms. The game trees shown begin at their roots, which represent the first move of the games. The male’s payoffs are written in the southwest of the boxes that represent the leaves of the tree where the games end. The female’s payoffs are written in the northeast of the boxes. The left game has an information set and the right game does not. The information set says that the female does not know whether she is at her left or right decision node when she moves. That is to say, she must decide whether to guard the nest or catch worms without knowing whether the male has decided to guard the nest or to catch worms. The thickened branches of the tree on the right show the actions that will be taken in a subgame-perfect equilibrium of the game.

Strategic forms

The strategic or normal form of a game (which Roughgarden knows as a payoff matrix) is obtained from the extensive form by a standard trick. One first identifies the strategies available to each player. These are plans of action that specify what the player will do under any contingency that might arise during the game. (In Chess, the number of such strategies is unimaginably large.) If all players commit themselves to one of their strategies, the resulting profile of strategies entirely determines the course of the game, apart from any chance moves that may have been built into its rules. We can therefore work out the average payoff to each player for every strategy profile. These are the numbers written in the familiar payoff matrices of evolutionary game theory. The only difference in the general case is that the payoffs need not be interpreted as fitness.

Figure 2 shows the strategic forms of the games of Fig. 1. The strategic form on the left is the basis of Roughgarden's (2009) analysis. Appropriately enough, it is a variant of a canonical game traditionally called the Battle of the Sexes (Luce & Raiffa, 1957), and I shall call it by that name, as it differs only in replacing the payoff pair (0,0) by (1,1). The immediate point is that when one takes account of the possibility of dynamic interaction during the game, one can be led to much more complicated strategic forms than those traditional in evolutionary game theory.
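
For readers who like to compute, the construction of both strategic forms can be sketched in a few lines. The payoff numbers below are my reading of the variant Battle of the Sexes discussed in this paper – (0,0) when both parents catch worms, (1,1) when both guard, (2,4) and (4,2) when they split the duties – and should be treated as an illustrative assumption rather than as Roughgarden's own matrix.

```python
from itertools import product

# Hypothetical payoffs, male's first: (0,0) if both catch worms, (1,1) if
# both guard the nest, (2,4)/(4,2) if the parents split the duties.
PAYOFF = {('c', 'c'): (0, 0), ('c', 'g'): (2, 4),
          ('g', 'c'): (4, 2), ('g', 'g'): (1, 1)}
ACTIONS = ('c', 'g')   # catch worms, guard the nest

def pure_nash(payoff):
    """Cells in which both payoffs are 'starred', i.e. mutual best replies."""
    return [(m, f) for m, f in product(ACTIONS, ACTIONS)
            if payoff[(m, f)][0] == max(payoff[(a, f)][0] for a in ACTIONS)
            and payoff[(m, f)][1] == max(payoff[(m, a)][1] for a in ACTIONS)]

def ladies_last(payoff):
    """Strategic form when the female observes the male's move: her pure
    strategies are pairs fs = (reply to his c, reply to his g)."""
    return {(m, fs): payoff[(m, fs[0] if m == 'c' else fs[1])]
            for m in ACTIONS for fs in product(ACTIONS, ACTIONS)}

print(pure_nash(PAYOFF))          # [('c', 'g'), ('g', 'c')]: the two coordination equilibria
print(len(ladies_last(PAYOFF)))   # 8 cells: 2 male strategies x 4 female strategies
```

The same `pure_nash` test applied by hand to the eight-cell Ladies Last table recovers the starred cells of Fig. 2.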

Figure 2.  Strategic forms. Roughgarden's (2009) payoff matrix is a variant of the Battle of the Sexes. It corresponds to the simultaneous-move extensive form in Fig. 1. The strategic form labelled Ladies Last corresponds to the extensive form in which the female observes the male moving first. She has four pure strategies in this game (rather than two). For example, cg denotes the pure strategy in which she catches worms if the male guards the nest, and she guards the nest if the male catches worms. The starred payoffs indicate best replies. Cells in which both payoffs are starred correspond to Nash equilibria in pure strategies. Both strategic forms also have Nash equilibria in mixed strategies.

Equilibrium

A Nash equilibrium is a strategy profile – with one strategy for each player – in which each player’s strategy is a best reply to the strategies of the other players.

Roughgarden (2009) says that Nash equilibrium is short for Nash competitive equilibrium. The notion of a Nash equilibrium at which players cooperate then seems oxymoronic. It is true that cooperation is impossible in a strictly competitive game, which is defined as a two-player game in which the players’ interests are diametrically opposed. Because the payoffs in each cell of the strategic form of such a game can be chosen to sum to zero, strictly competitive games are more frequently called zero sum. Roughgarden (2009) seems to think that a competitive game is any game to which the methods of noncooperative game theory are applied. She therefore refers to noncooperative game theory as competitive game theory, but this is not a useful innovation.

The cells in Fig. 2 in which both payoffs are starred correspond to Nash equilibria in pure strategies, but both games also have Nash equilibria in mixed strategies. Nash (1951) proved that all finite games have at least one Nash equilibrium if mixed strategies are allowed. It is ironic that the editor of Nash’s paper deleted his explanation that Nash equilibria admit an evolutionary interpretation on the grounds that only the rational interpretation is of interest!

The proof of Nash’s theorem shows that symmetric games have at least one symmetric Nash equilibrium. As symmetric games like the Battle of the Sexes of Fig. 2 arise very commonly in evolutionary biology, it is perhaps worth noting that only the male’s payoffs are usually written down when describing such a game. The female’s payoffs can be deduced from the fact that the game is symmetric. A similar convention applies with zero-sum games, but then the female’s payoffs must be deduced from the fact that they are the negative of the male’s.

Mixed strategies

A mixed strategy arises when players are allowed to randomize over their pure strategies with probabilities of their choice. For example, the (symmetric) mixed strategy in which each parent catches worms with probability one-fifth is a Nash equilibrium in the Battle of the Sexes of Fig. 2. This strategy makes the opponent indifferent between both pure strategies, so that any strategy (pure or mixed) becomes a best reply. In evolutionary biology, it is usually appropriate to interpret mixed strategies as ‘polymorphic equilibria’ in a larger population game. The particular male and female under study are then modelled as being chosen to play the Battle of the Sexes from large populations of males and females, each of which has one-fifth of its membership programmed to catch worms and four-fifths to guard the nest.
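
The indifference condition is easy to verify directly. With the payoff assumption used above (0 when both catch worms, 1 when both guard, 2 for catching while the other guards, 4 for guarding while the other catches), one-fifth is indeed the equilibrium probability:

```python
from fractions import Fraction

# Hypothetical payoffs to one parent: 0 if both catch worms, 1 if both guard,
# 2 for catching while the other guards, 4 for guarding while the other catches.
q = Fraction(1, 5)              # the other parent's probability of catching worms

u_catch = q * 0 + (1 - q) * 2   # expected payoff from catching worms
u_guard = q * 4 + (1 - q) * 1   # expected payoff from guarding the nest

print(u_catch, u_guard)         # 8/5 8/5: indifferent, so any reply is a best reply
```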

Minimax and maximin

Alice’s maximin payoff is the most that she can be sure of getting on the assumption that her opponent will guess her strategy and act to minimize her payoff. Her maximin strategies are those that guarantee her maximin payoff or better. Alice’s minimax payoff is the least that she can be forced to suffer on the assumption that she will guess her opponent’s strategy and act to maximize her payoff. (Commentators often say minimax when they mean maximin.)

Von Neumann (1928) proved that a player’s minimax and maximin payoffs in a finite, two-player, zero-sum game are equal when mixed strategies are allowed. It follows that all Nash equilibria of such games are found by pairing up the players’ maximin strategies.
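
Von Neumann’s result can be checked numerically for a small example. The sketch below brute-forces the row player’s maximin and minimax payoffs in Matching Pennies (my choice of zero-sum game) over a grid of mixed strategies:

```python
# Row player's payoffs in Matching Pennies; the column player gets the negative.
A = [[1, -1], [-1, 1]]
grid = [i / 100 for i in range(101)]   # candidate mixing probabilities

def payoff(p, q):
    """Row's expected payoff: p = prob of the first row, q = of the first column."""
    return (p * q * A[0][0] + p * (1 - q) * A[0][1]
            + (1 - p) * q * A[1][0] + (1 - p) * (1 - q) * A[1][1])

maximin = max(min(payoff(p, q) for q in grid) for p in grid)
minimax = min(max(payoff(p, q) for p in grid) for q in grid)
print(maximin, minimax)   # equal (zero up to rounding), as the theorem asserts
```

Both are achieved by the mixed strategy that plays each pure strategy with probability one-half.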

Incomplete information

If nothing else is said, game theorists take for granted that the rules of the game, the preferences of the players and the probabilities attached to chance moves are all common knowledge among the players.

In a ‘game of incomplete information’, this large assumption is relaxed. Harsanyi (1967) argued that one can often reduce a situation with incomplete information about the types of the players – their preferences and beliefs – to a game as normally understood, by introducing an initial chance move that chooses who will be required to play from a (possibly very large) set of potential players of varying types. Who knows what about this chance move is then determined by introducing appropriate information sets.

A game created in this way is called a Bayesian game (because the first thing players will do is to use Bayes’ rule to update their beliefs to take account of the fact they have themselves been chosen to play). A Nash equilibrium of such a Bayesian game is commonly called a Bayesian equilibrium.
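
The updating step itself is mechanical. In the toy calculation below, chance chooses the opponent’s type and a player revises a prior belief after observing a threat display; all the numbers are illustrative assumptions, not taken from any particular model:

```python
# Chance chooses the opponent's type; a player holding the prior below
# updates by Bayes' rule after observing a threat display.
prior = {'aggressive': 0.3, 'passive': 0.7}
pr_display = {'aggressive': 0.8, 'passive': 0.2}   # likelihood of the display

evidence = sum(prior[t] * pr_display[t] for t in prior)
posterior = {t: prior[t] * pr_display[t] / evidence for t in prior}

print(posterior)   # the display shifts belief towards the aggressive type
```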

The use of this technique has led to major practical successes for game theory in the design of big-money auctions in the telecom industry and elsewhere. It does not seem to have been exploited very much as yet in evolutionary biology, although it would seem a very natural next step.

Dynamics

A dynamic process that always moves in the direction of increased payoffs can only stop at a Nash equilibrium; hence its interest for evolutionary biologists. The idea of an evolutionarily stable strategy (ESS) is closely connected (Maynard Smith & Price, 1973). If all players use the same ESS in a symmetric game, the result is a symmetric Nash equilibrium, but not all symmetric Nash equilibria correspond to an ESS. (In asymmetric games played by animals drawn from different populations, evolutionary stability reduces to the idea of a strict Nash equilibrium – one in which there are no alternative best replies to the equilibrium strategies.)

The simplest model of evolutionary dynamics that is anywhere near realistic is called the replicator dynamics. If the replicator dynamics converge, they always converge on a Nash equilibrium, but this equilibrium need not be implemented by evolutionarily stable strategies in symmetric games with three or more pure strategies (Hofbauer & Sigmund, 1998). It follows that the idea of an ESS can only be entirely satisfactory in symmetric games with no more than two pure strategies.

Figure 3 shows the trajectories of the replicator dynamics (with two populations) in the Ultimatum Minigame (Binmore et al., 1993). The immediate point is that each of the infinite number of Nash equilibria of the game is the endpoint of at least one trajectory.
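
The two-population replicator dynamics for the Minigame are easy to simulate. The payoffs below are the conventional ones for this example – a fair offer splits 4 as (2,2) and is always accepted; an unfair offer yields (3,1) if accepted and (0,0) if refused – which reproduce the one-third refusal threshold, but they are my assumption rather than taken from the text:

```python
def step(x, y, dt=0.01):
    """One Euler step of the two-population replicator dynamics.
    x = fraction of proposers making the fair offer,
    y = fraction of responders accepting the unfair offer."""
    u_fair, u_unfair = 2.0, 3.0 * y          # proposer payoffs
    u_yes = 2.0 * x + 1.0 * (1.0 - x)        # responder payoffs
    u_no = 2.0 * x                           # refusal only bites on unfair offers
    mean_p = x * u_fair + (1.0 - x) * u_unfair
    mean_r = y * u_yes + (1.0 - y) * u_no
    return (x + dt * x * (u_fair - mean_p),
            y + dt * y * (u_yes - mean_r))

x, y = 0.1, 0.9     # start among mostly unfair proposers and mostly acceptors
for _ in range(20000):
    x, y = step(x, y)
print(round(x, 3), round(y, 3))   # 0.0 1.0: the subgame-perfect (unfair, yes)
```

Starting instead with mostly fair proposers and mostly refusers sends the same dynamics towards the component of Nash equilibria in which the fair offer is made, which is the point of the figure.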

Figure 3.  The Ultimatum Minigame. In the famous Ultimatum Game, Alice may offer any fraction of a sum of money to Bob, who must accept or refuse. If he refuses, both receive zero. The Ultimatum Minigame is a simplified version in which each player has only two pure strategies. The game has two Nash equilibria in pure strategies: (fair, no) and (unfair, yes). It also has an infinite number of mixed Nash equilibria in which Alice makes the fair offer and Bob plans to refuse an unfair offer with probability at least one-third. The points in the square represent states in which some fraction of a population of proposers are programmed to make fair offers, and some fraction of a population of responders are programmed to accept. Several trajectories of the replicator dynamics are sketched. Every Nash equilibrium is the endpoint of at least one such trajectory, but only the Nash equilibrium (unfair, yes) is subgame perfect. In the full Ultimatum Game, the only subgame-perfect equilibrium awards all the money to Alice, but it usually passes unnoticed that every split of the money is also a Nash equilibrium outcome.

Subgame-perfect equilibrium

In introducing Nash equilibria, Roughgarden (2009) contemplates what actions will be taken if one player moves first and the other responds with a best reply. She then observes that the result will be a Nash equilibrium in the Battle of the Sexes. If the female makes a best reply to the male, the actions taken in Ladies Last do indeed constitute a Nash equilibrium in the Battle of the Sexes, but so what? The players move simultaneously in the Battle of the Sexes.

It is particularly important not to introduce this confusion in a bargaining context, because we shall see that much turns on the distinction between a Nash equilibrium and a subgame-perfect equilibrium. A subgame-perfect equilibrium is a Nash equilibrium that induces Nash equilibrium play in every subgame – whether the subgame is reached in equilibrium or not (Selten, 1975). Subgame-perfect equilibria are used in economics when credible pre-play commitments to refuse to play Nash equilibria under certain specified contingencies are thought to be implausible.

One finds subgame-perfect equilibria in finite games with no information sets using backward induction, starting at the end of the game and working backwards towards the root. In the extensive form of Ladies Last, the optimizing actions for the female at each of her two decision nodes in Fig. 1 have been thickened. Assuming that she would always optimize, the best action is then for the male to take the action thickened for him. The pair of strategies indicated in this way in Fig. 1 is a subgame-perfect equilibrium. All subgame-perfect equilibria are Nash, but Nash equilibria need not be subgame perfect. For example, the equilibrium (c, gg) in Ladies Last is Nash but not subgame perfect. Similarly, only the Nash equilibrium (unfair, yes) in the Ultimatum Minigame of Fig. 3 is subgame perfect.
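
Backward induction in Ladies Last takes only a few lines. The payoffs are again my assumed reading of the variant Battle of the Sexes – (0,0), (2,4), (4,2), (1,1) – so the specific equilibrium found is only illustrative:

```python
# Hypothetical payoffs, male's first; the male moves first, the female observes.
PAYOFF = {('c', 'c'): (0, 0), ('c', 'g'): (2, 4),
          ('g', 'c'): (4, 2), ('g', 'g'): (1, 1)}
ACTIONS = ('c', 'g')   # catch worms, guard the nest

# At each of her decision nodes the female picks a best reply ...
best_reply = {m: max(ACTIONS, key=lambda f: PAYOFF[(m, f)][1]) for m in ACTIONS}
# ... and the male, anticipating this, optimizes against her plan.
male_move = max(ACTIONS, key=lambda m: PAYOFF[(m, best_reply[m])][0])

spe = (male_move, (best_reply['c'], best_reply['g']))
print(spe)   # ('g', ('g', 'c')): he guards, she always does the opposite

# Against 'always guard' (gg) the male's best reply is to catch worms, so
# (c, gg) is Nash; but guarding after he guards is not a best reply for her,
# so that equilibrium is not subgame perfect.
assert max(ACTIONS, key=lambda m: PAYOFF[(m, 'g')][0]) == 'c'
assert best_reply['g'] != 'g'
```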

The distinction between Nash equilibria and subgame-perfect equilibria is particularly important in an evolutionary context, because the conditions under which it is reasonable to defend subgame-perfect equilibria as representing the limits of evolutionary processes remain unclear. Even in very simple games, the replicator dynamics can converge on Nash equilibria that are not subgame perfect (Binmore et al., 1993), as indicated in Fig. 3 for the Ultimatum Minigame. The general case is studied by Cressman (2003).

Selten (1983) argues in favour of subgame-perfect equilibria in an evolutionary context on the grounds that we should always look at a perturbed version of a game in which the possibility of errors in a player’s choice of strategy is not excluded. He assumes that every player is afflicted with a ‘trembling hand’ that chooses each unintended pure strategy with some small probability P. A perfect equilibrium is then defined as a limit point of Nash equilibria of such perturbed games as P tends to 0 (Selten, 1975). This definition coincides with what we now call a subgame-perfect equilibrium in finite games with no information sets.

Nobody doubts the importance of stability, but trembling hands are not the only kind of perturbation. For example, perturbations in the evolutionary dynamic process can completely dominate the effect of trembling hands or payoff perturbations (Binmore et al., 1997). Making judgements on such issues is one of the problems that biologists applying game theory sometimes cannot evade.

Repeated games

Nobody would argue with Roughgarden (2009) when she advocates not making a model more complicated than necessary, but complications are sometimes necessary. More will be said on this subject when talking about cooperative game theory, but the point that needs to be made while we are still on the subject of noncooperative theory is that raising a family is not an event that can be modelled as taking place at a single instant. Simultaneous-move games like the Battle of the Sexes are therefore inadequate to represent the problem. Realism demands recognizing that nurture is an ongoing activity in which the parents make decisions over time that are likely to depend on how the other parent has behaved in the past.

Roughgarden (2009) recognizes the importance of time when she observes that the male and female may arrange their nurturing duties, so that one is always catching worms and the other guarding the nest, but that they can split these duties so that sometimes it is the male who goes foraging and sometimes the female. In entertaining this possibility, Roughgarden is implicitly treating her Battle of the Sexes as the stage game in an infinitely repeated game with infinitely patient players. In real life, it will not be the same game that gets repeated every time. Nor will the players be infinitely patient. Nor will the game be played infinitely often. But it seems reasonable to follow Roughgarden in considering her idealized case first on the grounds that it is a lot easier to think about than a more realistic model.

Repeated games have immensely large strategy sets. When the game that is being repeated is to be played again at the nth stage of the repeated game, each player’s choice of action may be contingent on the whole history of play in all the previous n−1 stages. However, a theorem called the folk theorem allows us to study the Nash equilibrium outcomes of the repeated game without the need to write down the game’s gigantic strategic form. [After Nash’s paper of 1951, a number of researchers independently realized the implications for repeated games, and so the folk theorem is not attributed to any particular person (Aumann & Maschler, 1995)].

The version of the folk theorem about to be described concerns infinitely repeated games between players who are infinitely patient, and for whom the history of past play is common knowledge. More general versions exist, but only a little is known about the case in which the players may only be able to monitor each other’s past play to a limited extent (Mailath & Samuelson, 2006).

One first finds the convex hull – the set of all convex combinations – of the set of payoff pairs available in the stage game. If the players could write binding pre-play contracts, all of the payoff profiles in this convex hull would be available as viable agreements to the players, because they could implement whichever point they agreed upon by using a suitable random device before the play of the game. For this reason, the convex hull is called the cooperative payoff region of the game. The folk theorem says that any point in the cooperative payoff region which assigns all players at least their minimax payoff is the long-run average payoff profile of some Nash equilibrium of the repeated game.
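
For the Battle of the Sexes as read above (outcomes (0,0), (2,4), (4,2), (1,1), with minimax payoff 1.6 to each player), the folk-theorem test for a payoff pair is a two-part check, sketched here under those assumptions:

```python
# The cooperative payoff region is the triangle (0,0), (2,4), (4,2); the
# fourth outcome (1,1) already lies inside it.  Minimax payoff: 1.6 each.
def in_triangle(p, a, b, c):
    """Is point p a convex combination of a, b, c?  (Barycentric test.)"""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return min(l1, l2, 1 - l1 - l2) >= -1e-9

MINIMAX = 1.6
def sustainable(p):
    """Available as a Nash equilibrium outcome of the infinitely repeated game?"""
    return in_triangle(p, (0, 0), (2, 4), (4, 2)) and min(p) >= MINIMAX

print(sustainable((3, 3)))   # True: efficient and above the minimax point
print(sustainable((1, 1)))   # False: feasible, but below the minimax point
```

The deeply shaded region of Fig. 4 is exactly the set of points for which `sustainable` returns True.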

In broad terms, the folk theorem says that the players of a repeated game can enjoy all the fruits of cooperation without any legal system or police force, because Nash equilibria are self-enforcing – nobody can profit by deviating from a Nash equilibrium unless someone else deviates first.

Figure 4 illustrates the folk theorem for the repeated Battle of the Sexes. (In this game, the players' minimax payoffs coincide with their payoffs at the game’s mixed Nash equilibria, but no such result need be true in general.)

Figure 4.  Payoff regions and equilibrium outcomes. The shaded region sketched in the left diagram is the noncooperative payoff region of the one-shot Battle of the Sexes. It consists of all payoff pairs that are possible if the Battle of the Sexes is played just once and the players choose (mixed) strategies independently. The stars show the three Nash equilibrium outcomes of the one-shot game. The shaded region on the right is the cooperative payoff region of the one-shot game. It consists of all payoff pairs that are possible in the one-shot Battle of the Sexes if the players make a binding pre-play agreement to play according to the outcome of a jointly observed random event. The deeply shaded part of the diagram on the right shows all payoff pairs that are available on average as Nash equilibrium outcomes if the game is played infinitely often by infinitely patient players.

Equilibrium selection problem

Realistic games commonly have many Nash equilibria. The equilibrium selection problem is to decide which of these equilibria (if any) is to serve as a prediction of how the game will be played. Evolutionary game theory enjoys a major advantage over traditional game theory in addressing this problem, because one can bring to bear data on the dynamic process that led to one equilibrium being selected rather than another.

The Battle of the Sexes is a canonical example in game theory, because it shows that the equilibrium selection problem can arise even in very simple circumstances. Infinitely repeated games provide an extreme example at the other end of the complexity spectrum, because almost any outcome of interest can be supported by Nash equilibrium strategies.

Game theorists once had hopes of solving the equilibrium selection problem in rational game theory by refining the notion of a Nash equilibrium. Subgame-perfect equilibria are one such refinement. Evolutionarily stable strategies can be regarded as another. Neither helps with the equilibrium selection problem in repeated games. Subgame-perfect equilibria do not help, because the folk theorem survives virtually unchanged if the notion of a Nash equilibrium is replaced by that of a subgame-perfect equilibrium. Evolutionarily stable strategies do not help, because no pure strategy can be an ESS in a nontrivial repeated game.

Most biologists know Axelrod’s (1984) Evolution of Cooperation, which studies the repeated Prisoners’ Dilemma. It is worth noting in passing that nearly all the claims Axelrod makes for the particular strategy tit-for-tat do not survive serious scrutiny, and so one cannot regard tit-for-tat as a solution to the equilibrium selection problem in the repeated Prisoners’ Dilemma (Binmore, 1996, chapter 3). It is not even true that tit-for-tat is an ESS as claimed, because a population all playing tit-for-tat can obviously be invaded by the strategy that always cooperates no matter what. Tit-for-tat is only a weak ESS, which is inadequate for stability (Binmore & Samuelson, 1990).
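
The invasion argument is easy to check by simulation. The Prisoners’ Dilemma payoffs below (T = 5, R = 3, P = 1, S = 0) are the conventional textbook choice, not taken from Axelrod:

```python
# Long-run average payoffs in the repeated Prisoners' Dilemma, showing that
# 'always cooperate' earns exactly what tit-for-tat earns against tit-for-tat,
# so a tit-for-tat population can be invaded by neutral drift.
R, S, T, P = 3, 0, 5, 1   # reward, sucker, temptation, punishment

def play(strat1, strat2, rounds=1000):
    """Average payoff to player 1; a strategy maps the opponent's previous
    move (None on the first round) to 'C' or 'D'."""
    total, last1, last2 = 0, None, None
    for _ in range(rounds):
        a, b = strat1(last2), strat2(last1)
        total += {('C', 'C'): R, ('C', 'D'): S,
                  ('D', 'C'): T, ('D', 'D'): P}[(a, b)]
        last1, last2 = a, b
    return total / rounds

def tft(last):                        # tit-for-tat: copy the opponent's last move
    return 'C' if last in (None, 'C') else 'D'

def allc(last):                       # always cooperate, no matter what
    return 'C'

print(play(tft, tft), play(allc, tft))   # 3.0 3.0: the mutant does no worse
```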

In the human species, solutions to the equilibrium selection problem are sometimes embedded in our culture. For example, Americans solve the equilibrium selection problem in the Driving Game by driving on the right. We commonly solve the equilibrium selection problem in games that arise less frequently by bargaining about which equilibrium to employ before the game is played. Without language, animals cannot bargain like humans, but Roughgarden (2009) convincingly explains how certain animal behaviours can be regarded as an implicit form of bargaining. The question then arises as to whether bargaining models developed by economists for human applications can be taken off the shelf and applied to bargaining between animals without much in the way of modification. I think not, but insofar as economic models of bargaining are taken off the shelf as a first step towards a more sophisticated theory, it is important that the models be used as their creators intended. Otherwise, there is a risk that the whole approach will be discredited.

Cooperative game theory

What counts as cooperation? Game theorists prefer not to tie themselves down to a formal definition, because the phenomenon is so diverse, but biologists like West et al. (2007, p. 49) are less timid. However, it seems to be universally agreed that full cooperation will result in players coordinating on an efficient outcome of whatever game they are playing, where a (Pareto) efficient outcome is defined as one in which no other outcome is available that all players prefer.

Why does it make sense for players to cooperate in some situations and not in others? When seeking to answer such questions, we have no choice but to use the methods of noncooperative game theory. Noncooperative game theory is therefore not restricted to the study of conflict, as Roughgarden (2009) imagines; it includes the study of games in which any cooperation that arises is fully explained by the choice of strategies the players make.

Cooperative game theory differs from noncooperative game theory in abandoning any pretension at explaining how and why cooperation is sustained. It postulates instead that the players have access to an unmodelled black box whose contents somehow resolve all such problems. In management jargon, cooperative game theory assumes that the problem of how cooperation is sustained is solved ‘off-model’ rather than ‘on-model’ as in noncooperative theory.

The economics literature takes for granted that the cooperative black box contains a pre-play negotiation period, during which the players are free to sign whatever agreements they choose about the strategies to be used in the game they are about to play. These pre-play agreements are understood to be binding. Once the players have signed an agreement, there is no way they can wriggle out of their contractual obligations should they later prove to be inconvenient.

This is why the cooperative payoff region of the one-shot Battle of the Sexes in Fig. 4 consists of the entire triangle with vertices (0,0), (4,2) and (2,4). The players can make a binding agreement to randomize over these pure outcomes and so achieve any pair of expected payoffs in this region. (The fourth pure strategy outcome (1,1) is already inside the region.) The efficient agreements are those on the line segment joining (4,2) and (2,4).

The immediate problem that arises when seeking to apply cooperative game theory in biology is that animals cannot sign binding agreements. They may behave as though they have negotiated an agreement, but what stops them cheating on their implicit agreement if some preferable option should arise? For this reason, it is very important in biological applications that the only agreements attributed to animals should be those in which the players end up playing Nash equilibria. (Sometimes it is argued that Aumann’s (1987) notion of a correlated equilibrium should replace Nash equilibria in such assertions, but the suggestion only muddies the water, since a correlated equilibrium is merely a Nash equilibrium in an augmented game obtained by prefixing the original game with a chance move that sends correlated signals to each player.)

The reason for insisting on restricting attention to agreements on Nash equilibria is that the agreements are then self-policing. This is why it is important that Roughgarden’s Battle of the Sexes be interpreted as the stage game of a repeated game in which all the agreements of interest can be implemented as Nash equilibrium outcomes. The folk theorem then justifies what would otherwise be a tendentious claim: namely that all agreements of any interest in the cooperative payoff region are attainable without needing to hypothesize some form of external enforcement that is usually absent in the animal world.

For example, Roughgarden’s (2009) example of an efficient implicit agreement, in which the male spends 43% of his time guarding the nest while the female forages and the female spends 57% of her time guarding the nest while the male forages, corresponds to a point on the line joining (4,2) and (2,4) in Fig. 4. Any such point can be implemented as a Nash equilibrium in the repeated game by requiring that any player who deviates from the agreed pattern of time-sharing behaviour be punished by the other player switching permanently to the strategy that implements the mixed equilibrium of the game.

The proof of the folk theorem amounts to confirming that this argument works in general. In more complicated cases, the mixed equilibrium payoffs must be replaced by the players’ minimax payoffs, because the worst punishment that can be inflicted on opponents in a repeated game consists of holding them to their minimax payoffs.
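The logic of such punishment strategies can be illustrated with a one-shot deviation check. The numbers below are illustrative assumptions in the spirit of the repeated Battle of the Sexes (cooperation worth 3 per period, a one-period deviation worth at most 4, punishment holding the deviator to 1 thereafter), not values taken from Roughgarden’s game:

```python
def deviation_profitable(coop=3.0, temptation=4.0, punishment=1.0, delta=0.9):
    # Present value of keeping the agreement for ever, versus deviating once
    # and then being held to the punishment payoff in every later period.
    # delta is the per-period discount factor.
    keep = coop / (1 - delta)
    deviate = temptation + delta * punishment / (1 - delta)
    return deviate > keep

print(deviation_profitable(delta=0.9))  # patient players: deviation does not pay
print(deviation_profitable(delta=0.2))  # impatient players: deviation pays
```

The folk theorem turns on exactly this comparison: for sufficiently patient players, the short-run gain from cheating is outweighed by the long-run loss from being punished.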

Coalitional forms

Once one has identified the viable agreements in the underlying game, one can often throw away the rest of its strategic structure when attempting a cooperative analysis. In cooperative game theory, one is therefore usually only told the payoff profiles that are available as potential agreements to each possible coalition of players. Such coalitional (or characteristic) forms can be regarded as a further step on the reductive road that begins with the extensive form of a game.

Given a coalitional form, it is usual to write down sets of axioms that characterize a ‘solution’ of the game in terms of the game’s coalitional form and nothing else. It is important to be aware that large numbers of such axiom systems have been proposed, each leading to a different solution concept.

How does one decide whose solution concept – if any – to apply in any given situation? By and large, economists and political scientists who appeal to cooperative game theory tend to pick whichever solution concept comes closest to confirming their prejudices. However, the Nash programme offers a more scientific way of proceeding.

Nash programme

Nash (1951) pointed out that any negotiation is itself a species of game, in which the moves are everything the players may say or do while bargaining. If we model any bargaining that precedes a game G in this way, the result is an enlarged game N. A strategy for this negotiation game first tells a player how to conduct the pre-play negotiations, and then how to play G depending on the outcome of the negotiations. We cannot hope to build a model of N that takes account of all the irrational twists and turns of real-life negotiations, but economists think that fairly simple models are sometimes adequate to capture all the strategic considerations that are genuinely relevant.

Negotiation games must be studied without presupposing pre-play bargaining, all pre-play activity having already been built into their rules. Analysing them is therefore a task for noncooperative game theory. This means looking for their Nash equilibria, in the hope that the equilibrium selection problem will prove amenable.

When negotiation games can be solved successfully, we have a way of checking up on cooperative game theory. If a cooperative solution concept says that the result of a rational agreement on how to play G will be s, then we should also obtain s from solving N.

Perhaps the most important point this paper has to make is that to accept the Nash programme is to deny Roughgarden’s (widely shared) belief that cooperative game theory is a radically different approach to noncooperative theory. Nash regarded the two approaches as complementing each other. If one follows Nash, one can therefore make use of ideas from cooperative game theory without feeling any need to denounce the neo-Darwinian synthesis to which Roughgarden is so hostile.

Nash bargaining solution


Nash’s (1951) technical contribution to noncooperative game theory is the Nash equilibrium. Nash’s (1950) technical contribution to cooperative game theory is the Nash bargaining solution. Roughgarden mistakenly rejects the former and embraces the latter in the context of her nurturing game. In doing so, she echoes a common misunderstanding when she says that the Nash bargaining solution is to be interpreted as a scheme for fair arbitration (Roughgarden, 2009, p. 155). (Nash’s scale invariance axiom rules out such an interpretation, because a criterion that cannot compare the welfare of the two players obviously cannot be used to make fairness judgements.)

In the case of two players, either the coalition of both players will form to implement a cooperative agreement, or else no agreement will be reached. The set of possible payoff pairs that can serve as potential agreements is called the feasible set of the bargaining problem. The payoff pair that results in the event that no agreement is reached is commonly called the disagreement point or status quo. I prefer to call it the deadlock point, as disagreement may arise in more than one way. Nash gave the following list of axioms that are intended to characterize a unique rational outcome for a bargaining problem couched in such terms.

  • Axiom 1. Efficiency

  • Axiom 2. Scale invariance

  • Axiom 3. Symmetry

  • Axiom 4. Independence of Irrelevant Alternatives

This is not the place to rehearse the precise statements of these axioms (Binmore, 2007, chapter 16). All that is important here is that Nash (1950) showed that the axioms determine a unique outcome in the feasible set, called the Nash bargaining solution of the problem. It is located at the point in the feasible set at which the product of the players’ gains over their disagreement payoff is largest. This product is called the Nash product. Figure 5 illustrates both how the Nash bargaining solution can be identified as the midpoint of a certain line segment, and how the existence of outside options alters the outcomes to be regarded as feasible.


Figure 5.  Nash bargaining solution. The feasible sets are shaded. The deadlock points are the payoff profiles that will result if the bargaining is prolonged indefinitely. The breakdown points are the payoff profiles that will result if the negotiations are abandoned and the players take up their best outside options. The feasible sets are constrained by the breakdown payoffs, because nobody will agree to less than they can have elsewhere. Note that the precise location of the breakdown point is irrelevant to the location of the Nash bargaining solution in the figure on the left, but not in the figure on the right. Note also that the Nash bargaining solution can be found as the midpoint of a certain line segment without needing to refer to the Nash product.


Provided that the players have equal deadlock payoffs and outside options are irrelevant, we only need axioms 1 and 3 to locate the Nash bargaining solution for the feasible set of the repeated Battle of the Sexes shown in Fig. 4. Efficiency says that the solution must lie on the line joining (4,2) and (2,4). Symmetry says that the solution to symmetric problems must be symmetric. For the repeated Battle of the Sexes, the Nash bargaining solution is therefore (3,3), which means that the animals both spend half their time foraging and half guarding the nest, each staying home when the other is away.

To obtain a less trivial result, it is necessary to look at an asymmetric bargaining problem. For example, if we locate the disagreement point at (2,3), the Nash bargaining solution becomes (5/2,7/2), which is implemented by having the male forage 25% of the time and the female 75% of the time.
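Both results can be checked numerically. The sketch below is a hypothetical helper, not part of the original text: it maximizes the Nash product along the efficient frontier of the repeated Battle of the Sexes, where (as noted later) the efficient payoff pairs take the form (x, 6 − x) with 2 ≤ x ≤ 4:

```python
def nash_solution(d1, d2):
    # Efficient frontier of the repeated Battle of the Sexes: (x, 6 - x),
    # 2 <= x <= 4. Maximizing the Nash product (x - d1)(6 - x - d2) gives
    # the first-order condition x = (6 + d1 - d2) / 2, clipped to the frontier.
    x = max(2.0, min(4.0, (6 + d1 - d2) / 2))
    return x, 6 - x

print(nash_solution(1, 1))  # symmetric disagreement point -> (3.0, 3.0)
print(nash_solution(2, 3))  # disagreement point (2, 3)    -> (2.5, 3.5)
```

The first call reproduces the symmetric solution (3,3); the second reproduces the asymmetric solution (5/2, 7/2) discussed above.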

Team fitness?

Perhaps I am being dense, but Roughgarden’s way of trying to apply the Nash bargaining solution in biology seems vulnerable to the same criticisms that she directs against group selection arguments elsewhere in her book. She models the parents as a coalition that seeks to maximize the Nash product, which she calls the ‘team fitness function’. Economists would call her ‘team fitness function’ an example of a social welfare function – the payoff function for a whole community treated as an indivisible bloc.

Roughgarden misuses the word team in much the same way that she misuses the word competitive. Both terms properly refer to kinds of games rather than to ways of playing games. A team game is usually understood to be a game of perfect coordination (or a common-interest game) in which the players’ payoffs are the same in each cell of the strategic form. So players in team games want to cooperate because their interests are identical – not because they have given up seeking to promote their own interests in favour of some entity larger than themselves.

Roughgarden’s introduction of a social welfare function is at variance with Nash’s motivation in formulating his bargaining solution. His players are not modelled as trying to do anything other than maximize their own individual payoffs. If the bargaining procedure they employ satisfies his axioms, they will indeed end up at an outcome at which the Nash product is maximized. However, it does not follow that the players end up there because they are jointly trying to maximize Nash’s product, any more than the molecules in a room end up maximizing entropy because that is what they are jointly trying to do.

This particular mistake is easily avoided when applying Nash’s axiomatic bargaining theory in biology, but other difficulties also arise.

Symmetry?

Without the symmetry axiom, one is led to an asymmetric version of the Nash bargaining solution in which the players’ gains over their disagreement payoffs are raised to a power before multiplying them to obtain a generalized Nash product. The ratio of these powers reflects the relative ‘bargaining power’ of the players. Should we join Roughgarden in her implicit assumption that males and females have equal bargaining power? I guess the answer depends on the species, but assuming symmetry seems a natural way to begin.
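On the same stylized frontier as before, the generalized Nash product (x − d1)^a (6 − x − d2)^b can be maximized in closed form; the helper below is a hypothetical sketch, with a and b standing for the players’ bargaining powers:

```python
def asymmetric_nash(d1, d2, a, b):
    # Maximize (x - d1)^a * (6 - x - d2)^b on the frontier (x, 6 - x),
    # 2 <= x <= 4. The first-order condition a*(6 - x - d2) = b*(x - d1)
    # gives x = (a*(6 - d2) + b*d1) / (a + b), clipped to the frontier.
    x = max(2.0, min(4.0, (a * (6 - d2) + b * d1) / (a + b)))
    return x, 6 - x

print(asymmetric_nash(1, 1, 1, 1))  # equal bargaining power -> (3.0, 3.0)
print(asymmetric_nash(1, 1, 2, 1))  # player 1 has twice the power: split shifts his way
```

With equal powers the symmetric solution (3,3) is recovered; only the ratio a/b matters, as the text’s remark about relative bargaining power suggests.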

Scale invariance?

This consideration seems fatal for Roughgarden’s appeal to Nash’s axioms. Scale invariance means that each player’s payoffs can be treated like degrees on a temperature scale. That is to say, we can change the origin and the unit on their payoff scales without altering anything that matters.

It is obvious that any such rescaling of the players’ payoffs will not affect the Nash equilibria of the game, but Nash’s inclusion of scale invariance among his axioms is a signal that it might well affect how a bargaining problem is solved. Do we want to join Nash in explicitly ruling out the possibility that rescaling a player’s payoffs might change the players’ bargaining strategies? When the original payoffs are fitnesses, it seems to me that this would be a really bad mistake.

For example, in some species of birds, parents raising a brood are sometimes assisted by younger relatives. Suppose the father in the Battle of the Sexes were to die and be replaced by a nephew. The nephew’s degree of relatedness to the cousins he would then be nurturing is one-fourth that of the father. Ignoring the costs and so paying attention only to benefits, a crude assessment of the situation would then require that each of the dead father’s payoffs be reduced to a quarter of its former value in the game now played by the mother and the nephew. Do we want to say that this change will make no difference to our analysis of the problem? That the mother and the nephew will resolve their bargaining problem exactly as the mother and the dead father would have done?

I think that such examples show that scale invariance is unacceptable as an axiom in games played between parents (even if we were not going to encounter problems later with shifting the zeros on the payoff scales as well as with rescaling the units). Various axiom systems have been proposed that replace scale invariance (which says that the players’ payoffs cannot meaningfully be compared at all) with an axiom that insists on full interpersonal comparison. These systems typically characterize the egalitarian (or proportional) bargaining solution (Binmore, 2007, chapter 18). However, in the nurturing problem as proposed by Roughgarden, we have a situation in which the payoffs are partially comparable in a manner that I do not believe has been studied axiomatically at all.

To expand on this point, the fitness of parents in a nurturing game obviously depends to a large degree on the survival of their common offspring. But they must have divergent interests as well if it is going to be worth appealing to a bargaining argument. In the case of exclusively sexual species, these divergent interests presumably relate to the fitness they will enjoy when paired with new partners in the future or with alternative partners in the present. The part of their payoffs that refers to their common offspring will be directly comparable; the part that refers to their divergent interests will not.

Identifying a disagreement point

Roughgarden (2009) appeals to Nash’s (1953) threat theory in seeking to identify a disagreement point when applying the Nash bargaining solution to her version of the Battle of the Sexes. This seems odd given her hostility to competitive models, because nothing in bargaining theory is more competitive than a Nash threat game.

Nash (1953) models the players as opening their negotiations by making irrevocable threats about how they will play the Battle of the Sexes if the negotiations that are about to take place end in disagreement. The payoff pair that will result if these threats are carried out is called the threat point. One then uses this threat point as the disagreement point in an application of the Nash bargaining solution to determine what payoff each player will receive from each possible pair of threats.

In the Battle of the Sexes, the players’ final payoffs in the threat game always sum to 6 [because efficient outcomes lie on the line joining (4,2) and (2,4)]. Any change of outcome in the threat game that is good for one player is therefore bad for the other.

A Nash equilibrium in such a strictly competitive game requires both players to use their maximin strategies. In the threat game derived from the Battle of the Sexes, each player’s maximin threat is to guard the nest irrespective of what the other player may do. The disagreement point (1,1) that results from both players making these threats is symmetric, and we have seen that the final result is then that each player shares the nurturing duties equally. Roughgarden considers an asymmetric version of the problem where the final answer is not so obvious, but the technical issues involved are of no particular interest.

Nash’s theory of threats has fallen out of favour with economists interested in describing human behaviour, because of doubts over the extent to which the threats that people make in bargaining situations are genuinely credible. I may threaten to burn down our house if my wife does not allow me to watch my favourite television programme, but will she believe me? One may respond that the threats that animals make are written in their genes, but then it becomes necessary to tell some plausible story of how such complex behaviour evolved, which does not seem to be an easy task in the context of nurturing relationships. Biological commentaries on these and related issues are provided by Cant & Johnstone (2009) and McNamara & Houston (2002).

My own view is that Roughgarden goes awry in proposing what is at risk of being seen as the one-and-only way to determine a disagreement point when applying the Nash bargaining solution to biological problems. In many cases, the appropriate choice of a disagreement point may be completely independent of the payoffs written into whatever strategic form corresponds to Roughgarden’s Battle of the Sexes.

Noncooperative bargaining models


I am puzzled by Roughgarden’s attitude to the kind of noncooperative bargaining models that are necessary to implement the Nash programme in her context. She reviews, reasonably favourably, various ways in which animals might operate some analogue of a noncooperative negotiation game, but then asks whether animals are really ‘stuck stumbling upon cooperative outcomes only after flopping down exhausted from the day’s combat’ (Roughgarden, 2009, p. 154). It seems to me that one might equally ask whether it is really true that human lovers stumble on the idea of getting married only after exhausting themselves in negotiating a prenuptial contract. But the fact that the answer is no does not imply that prenuptial negotiations are irrelevant to the marriages of seriously rich couples.

Such a viewpoint does not imply that one is ‘taking the altruism out of altruism’ (De Waal, 2008). Some species of animals that form sexual partnerships doubtless experience their own analogues of what humans call love and affection. It is sometimes thought that accepting this obvious fact implies that noncooperative game theory cannot be applied – a mistake encouraged in biology by its past focus on evolutionary games in which the payoffs are very reasonably taken to be the fitness of ‘selfish’ genes. However, if one accepts Roughgarden’s equally reasonable suggestion that it is sometimes worth considering a behavioural tier built upon the traditional evolutionary tier, then it is necessary to widen one’s horizons, because it is not obvious what payoffs should be written in behavioural games. Orthodox game theory – whether cooperative or noncooperative – offers no guidance on this matter because it is entirely concerned with how players can best achieve whatever their goals happen to be, and not at all with making judgments about what these goals are or ought to be.

In brief, it is not the business of game theorists to tell Juliet that she is unwise to love Romeo or Shylock that he would do better not to hate Antonio. Game theorists assume that the loves and hates of the players are subsumed into the payoffs of the game they play and have therefore been taken account of in setting up the problems that game theorists seek to solve. The same goes in biology. In particular, when I criticize Roughgarden’s use of the Battle of the Sexes as a model of sexual partnerships, I do so as an amateur biologist. It is only when I criticize her analysis of the game that I speak as a professional game theorist.

The immediate point is that an assessment of the following two noncooperative bargaining models needs to put aside the mistaken idea that noncooperative models necessarily assume that all behaviour is selfishly motivated. Rather than opposing their use, Roughgarden should welcome them with open arms, because they provide two possible defences of the Nash bargaining solution that do not rely on the scale invariance axiom (which actually does assume that the players negotiate without any concern for the welfare of their partner).

Nash’s demand game

The Nash demand game is Nash’s own contribution to the Nash programme. Johnstone & Hinde (2006) is the nearest the biology literature seems to come to his model.

Nash (1950) models the negotiation game N very simply as the exchange of simultaneous take-it-or-leave-it demands. If the pair of demands lies in the feasible set of the problem, both players receive their demands. Otherwise both players receive their disagreement payoffs. The equilibrium selection problem goes away if there is some uncertainty over the precise location of the boundary of the feasible set (Binmore & Dasgupta, 1987, chapter 2). All Nash equilibria of the negotiation game then approximate the Nash bargaining solution. Notice that it is explicit in this model that both players act to maximize their own expected payoff, although the outcome is as if they had jointly acted to maximize Roughgarden’s team fitness function.

An advantage of the Nash demand game is that Harsanyi’s theory of incomplete information can be used to extend it to the case of incomplete information over the preferences of the players (Binmore, 1987). All approximately efficient Nash equilibria then approximate a multi-player version of the Nash bargaining solution axiomatized by Harsanyi & Selten (1972). The Nash product in this theory has a factor for each potential player raised to a power equal to the probability that he or she will be chosen to play.

The difficulties with credible commitment that afflict Nash’s threat theory also apply to the Nash demand game. I may say that I am now making my last-and-final-offer, but why should anyone believe me? As Maynard Smith (1982) observes in commenting on pre-combat displays in male finches, populations programmed to believe aggressive signals are vulnerable to invasion by mutants who send the signal but would actually back down if challenged (Robson, 1990).

On the other hand, if such objections are thought unpersuasive when considering threats, why should they be thought persuasive when considering demands? If so, then Roughgarden’s reliance on the Nash threat theory to determine the location of a disagreement point should also lead her to accept Nash’s demand game as an adequate model of how mating couples bargain. She is unlikely to see things this way, but perhaps the paradox will persuade her that there may be some merit in Rubinstein’s (1982) very different bargaining model.

Rubinstein’s alternating-offers game

Roughgarden (2009) speaks of a ‘war of attrition’ when discussing the value of delay as a bargaining ploy. It is such issues of timing that motivate Rubinstein’s model of bargaining. McNamara et al. (1999) take this line in a biological model, although their approach is complicated by their taking issues of incomplete information on board at the same time.

In Rubinstein’s (1982) bargaining game, the players alternate in making proposals on who should contribute how much to a joint venture until a proposal is accepted. (One can vary many of the details in his model without changing the basic conclusion. In particular, strict alternation of proposals is not required. Nor need the players be devoid of any income while negotiating.) Their incentive for agreeing earlier rather than later is that each discounts the unproductive passage of time at a fixed rate. It turns out that Rubinstein’s game has a unique subgame-perfect equilibrium.

The fact that Rubinstein’s argument depends on the idea of a subgame-perfect equilibrium is something of a problem. His theory can be made to work with weaker refinements of Nash equilibrium, but I retain doubts about both the rationality and the evolutionary arguments offered in their defence. However, the Nobel laureates Aumann (1995) and Selten (1975) are less sceptical. It is also a pity that versions of his model with incomplete information have many equilibria among which it is not easy to make a selection.

My own contribution to Rubinstein’s theory was to show that the equilibrium outcome in his alternating-offers game approximates an asymmetric version of the Nash bargaining solution provided that the interval between successive proposals is sufficiently small (Binmore & Dasgupta, 1987, chapter 2). The bargaining powers in the asymmetric Nash product are the inverses of the players’ discount rates – so that impatient players end up with less than patient players.

A simple model of parenting will ignore possible differences in the players’ attitudes to time by assuming that their discount rates are determined by nothing more than the probability p_t that a (small) delay of t in reaching an agreement will result in the parents failing to raise a brood at all. The players’ discount factors will then be the same. Allowing t to become vanishingly small then leads us to the symmetric Nash bargaining solution.
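This limiting result can be illustrated numerically. The sketch below is a hypothetical helper assuming a unit pie and exponential discounting: player 1’s equilibrium share in Rubinstein’s game converges, as the interval between offers shrinks, to r2/(r1 + r2), the asymmetric Nash bargaining solution with powers inversely proportional to the discount rates:

```python
import math

def rubinstein_share(r1, r2, dt):
    # Player 1's subgame-perfect equilibrium share of a unit pie when
    # proposals alternate every dt time units and the players discount
    # at rates r1, r2, so delta_i = exp(-r_i * dt).
    d1, d2 = math.exp(-r1 * dt), math.exp(-r2 * dt)
    return (1 - d2) / (1 - d1 * d2)

# As dt -> 0, the share approaches r2 / (r1 + r2): the impatient player
# (higher discount rate) ends up with less than the patient one.
for dt in (1.0, 0.1, 0.001):
    print(round(rubinstein_share(0.1, 0.2, dt), 4))
print(0.2 / (0.1 + 0.2))  # limiting share, roughly 0.6667
```

With equal discount rates the limiting split is 50:50, which is the symmetric Nash bargaining solution described above.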

In this application of the Nash bargaining solution, the disagreement point is no longer determined by threats. The disagreement point in Rubinstein’s setting is the pair of payoffs that the players would receive if all offers made were always refused. (In equilibrium, the very first offer will be accepted, but the offer that is made in equilibrium is determined by what would happen if the proposer offered less.) I call this pair of payoffs the deadlock point (Binmore, 2007, chapter 16).

The nature of the deadlock point in Rubinstein’s model suggests that we consider again the interpretation of the payoffs in Roughgarden’s Battle of the Sexes. Section 4 cautions against arbitrarily changing the units in which a player’s payoffs are measured. This section cautions against making an arbitrary choice of where to locate a player’s zero payoff. The reason is that the deadlock point in the simplest version of Rubinstein’s model will necessarily be (0,0), because anything discounted to infinity will be assigned a payoff of zero. (For ways to calculate disagreement points in more sophisticated models, see Binmore, 2007.)

However, we have already assigned the payoff pair (0,0) to the event in the Battle of the Sexes in which both parents choose to catch worms at the same time. But leaving the nest unguarded for some short period is surely not so bad as never getting to breed at all. If (0,0) is assigned to the deadlock point, something positive must therefore be added to all the payoffs in the Battle of the Sexes before applying the Nash bargaining solution, for which purpose some new empirical data will be required.

Outside options

Further empirical data will also be necessary to assess the outside options of the players to determine where to locate what I call the breakdown point. Such outside options are absent from Roughgarden’s analysis but are regarded as being of major importance both in economics and in evolutionary biology, for example, Kokko & Jennions (2008).

The players’ outside options are their expected payoffs if someone unilaterally abandons the negotiations to take up their best opportunity elsewhere. Animals, for example, may give up trying to come to terms with the partner with whom they are currently flirting, to seek an alternative partnership with another mate. Individuals belonging to species that are also able to reproduce asexually may choose to take up this outside option.

It is obvious that neither player will agree to a deal with their current partner if they expect to acquire a better deal with another partner or by exercising some other superior outside option. It is therefore not surprising that applying the Rubinstein model to the case with outside options simply leads to the same conclusion as before, but with all outcomes that award one or more of the players less than their outside option discarded from the feasible set.

A common mistake is to take the pair of outside options (the breakdown point) to be the disagreement point in the Nash bargaining solution (instead of the deadlock point). This might be the right thing to do if the players could credibly threaten to abandon the negotiations if their current proposal is turned down, but the Rubinstein model implicitly assumes that threats will only be carried out if it is in the interests of the threatener to do so after the threat has failed.
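The distinction can be made concrete with a small sketch (a hypothetical helper, using the same stylized frontier as earlier): the deadlock point enters the Nash product, while the outside options merely truncate the feasible set, as in Fig. 5:

```python
def nash_with_outside_options(d1, d2, b1, b2):
    # Efficient frontier (x, 6 - x), 2 <= x <= 4. The deadlock point
    # (d1, d2) is the disagreement point in the Nash product; the
    # breakdown point (b1, b2) only truncates the feasible set, because
    # no player accepts less than their best outside option.
    x = (6 + d1 - d2) / 2        # Nash solution relative to the deadlock point
    x = max(2.0, min(4.0, x))    # stay on the frontier
    x = max(x, b1)               # player 1's outside option binds from below
    x = min(x, 6 - b2)           # player 2's outside option binds from above
    return x, 6 - x

print(nash_with_outside_options(1, 1, 0, 0))    # options do not bind -> (3.0, 3.0)
print(nash_with_outside_options(1, 1, 3.5, 0))  # player 1's option binds -> (3.5, 2.5)
```

As the caption to Fig. 5 notes, the precise location of the breakdown point is irrelevant when neither option binds, but moves the solution when one does.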

Conclusion


The distinction between cooperative and noncooperative game theory is easy to misunderstand. However, it is essential to avoid Roughgarden’s (2009) mistake of supposing that cooperative game theory is based on the assumption that the players will seek to maximize the value of a common payoff. On the contrary, in both cooperative and noncooperative game theory, the players are assumed to seek to maximize their own individual payoffs (although these payoffs need not be selfish). One then tries to predict the players’ behaviour by finding the Nash equilibria of whatever game they are playing. If the game is suitably structured (as in a sexual partnership), the players’ behaviour at some of these equilibria will satisfy the criteria we associate with cooperation. Cooperative game theory differs from noncooperative game theory only in seeking to say things about the properties of such cooperative equilibria without getting bogged down in a detailed discussion of the precise strategies necessary to sustain them.

To apply the economic theory of bargaining in biology, it is therefore unnecessary to reject the modern neo-Darwinian synthesis. On the contrary, the ideas that Roughgarden mistakenly supposes apply only when Nature is red in tooth and claw are the same ideas on which the bargaining theory to which she appeals is based. To apply bargaining theory, neo-Darwinians need only be ready to look at more complicated games than has been common hitherto in evolutionary theory. In particular, Rubinstein’s noncooperative model provides better support for Roughgarden’s bargaining conclusions than the axiomatic theory she mistakenly supposes to represent a gentler paradigm.

Acknowledgments


I gratefully acknowledge the financial support of both the British Economic and Social Research Council through the Centre for Economic Learning and Social Evolution (ELSE) and the British Arts and Humanities Research Council through grant AH/F017502.

References

  • Aumann, R. 1987. Correlated equilibrium as an expression of Bayesian rationality. Econometrica 55: 118.
  • Aumann, R. 1995. Backward induction and common knowledge of rationality. Games Econ. Behav. 8: 619.
  • Aumann, R. & Maschler, M. 1995. Repeated Games with Incomplete Information. MIT Press, Cambridge, MA.
  • Axelrod, R. 1984. The Evolution of Cooperation. Basic Books, New York.
  • Binmore, K. 1987. Bargaining with incomplete information. In: Economics of Bargaining (K.Binmore & P.Dasgupta, eds), pp. 2135. Cambridge University Press, Cambridge.
  • Binmore, K. 1996. Just Playing: Game Theory and the Social Contract II. MIT Press, Cambridge MA.
  • Binmore, K. 2007. Playing for Real. Oxford University Press, New York.
  • Binmore, K. & Dasgupta, P. 1987. The Economics of Bargaining. Blackwell, Oxford.
  • Binmore, K. & Samuelson, L. 1990. Evolutionary stability in repeated games played by finite automata. J. Econ. Theor. 57: 278305.
  • Binmore, K. & Samuelson, L. 1997. Muddling through: noisy equilibrium selection. J. Econ. Theor. 74: 235265.
  • Binmore, K., Rubinstein, A. & Wolinsky, A. 1982. The Nash bargaining solution in economic modelling. Rand J. Econ. 17: 176188.
  • Binmore, K., Gale, J. & Samuelson, L. 1993. Learning to be imperfect: the ultimatum game. Games Econ. Behav. 8: 5690.
  • Cant, M. & Johnstone, R. 2009. How threats influence the evolutionary resolution of within-group conflict. Am. Nat. 173: 759771.
  • Cressman, R. 2003. Evolutionary Dynamics and Extensive Form Games. MIT Press, Cambridge, MA.
  • De Waal, F. 2008. Putting altruism back into altruism: the evolution of empathy. Annu. Rev. Psychol. 59: 279300.
  • Harsanyi, J. 1967. Games with incomplete information played by “Bayesian” players, I–III. Man. Sci. 14: 159–182.
  • Harsanyi, J. & Selten, R. 1972. A generalized Nash solution for two-person bargaining games with incomplete information. Man. Sci. 18: 80–106.
  • Hofbauer, J. & Sigmund, K. 1998. Evolutionary Games and Population Dynamics. Cambridge University Press, Cambridge.
  • Johnstone, R. & Hinde, C. 2006. Negotiation over offspring care – how should parents respond to each other’s efforts? Behav. Ecol. 17: 818–827.
  • Kokko, H. & Jennions, M. 2008. Parental investment, sexual selection and sex ratios. J. Evol. Biol. 21: 919–948.
  • Luce, R. & Raiffa, H. 1957. Games and Decisions. Wiley, New York.
  • Mailath, G. & Samuelson, L. 2006. Repeated Games and Reputations: Long-Run Relationships. Oxford University Press, New York.
  • Maynard Smith, J. 1982. Evolution and the Theory of Games. Cambridge University Press, Cambridge.
  • Maynard Smith, J. & Price, G. 1972. The logic of animal conflict. Nature 246: 15–18.
  • McNamara, J. & Houston, A. 2002. Credible threats and promises. Philos. Trans. R. Soc. Lond. B Biol. Sci. 357: 1607–1616.
  • McNamara, J., Gasson, C. & Houston, A. 1999. Incorporating rules for responding into evolutionary games. Nature 401: 368–371.
  • Nash, J. 1950. The bargaining problem. Econometrica 18: 155–162.
  • Nash, J. 1951. Non-cooperative games. Ann. Math. 54: 286–295.
  • Nash, J. 1953. Two-person cooperative games. Econometrica 21: 128–140.
  • Periera, M., Bergman, A. & Roughgarden, J. 2003. Socially stable territories: the negotiation of space by interacting foragers. Am. Nat. 161: 143–152.
  • Robson, A. 1990. Efficiency in evolutionary games: Darwin, Nash, and the secret handshake. J. Theor. Biol. 144: 379–396.
  • Roughgarden, J. 2009. The Genial Gene. University of California Press, Berkeley and Los Angeles.
  • Rubinstein, A. 1982. Perfect equilibrium in a bargaining model. Econometrica 50: 97–109.
  • Selten, R. 1975. Reexamination of the perfectness concept for equilibrium points in extensive games. Int. J. Game Theor. 4: 25–55.
  • Selten, R. 1983. Evolutionary stability in extensive 2-person games. Math. Soc. Sci. 5: 269–363.
  • Trivers, R. 1971. Evolution of reciprocal altruism. Quar. Rev. Biol. 46: 35–56.
  • Von Neumann, J. 1928. Zur Theorie der Gesellschaftsspiele [On the theory of parlour games]. Math. Ann. 100: 295–320.
  • Von Neumann, J. & Morgenstern, O. 1944. Theory of Games and Economic Behavior. Princeton University Press, Princeton.
  • West, S., Griffin, A. & Gardner, A. 2007. Social semantics: altruism, cooperation, mutualism, strong reciprocity and group selection. J. Evol. Biol. 20: 415–432.