Brian Skyrms, The Stag Hunt and the Evolution of Social Structure, Cambridge University Press, 2004, 166 pages.
I would recommend this book very highly to all those with an interest in understanding social behaviour in organizations, markets and societies. Skyrms' title comes from a well-known passage in Rousseau's writings, where the philosopher poses the dilemma of why we see cooperation in situations where each individual would gain by free-riding or otherwise defecting from the cooperative endeavor. Rousseau cites the example of a stag hunt, which can only succeed if each hunter in the group devotes his or her full attention to the joint enterprise. Were any of these hunters to see a hare running past, they would surely be tempted to chase it, securing a certain prize in place of the uncertain success of the stag hunt, which depends on none of the other hunters defecting.
Cast in the terms of contemporary game theory, Rousseau's stag hunt is an example of an "assurance game"--that is, a game in which there are high returns when all cooperate, lower returns when all defect, and the lowest payoff of all for the player who cooperates while others defect. Assurance games are unlike the classic Prisoners' Dilemma, where defection is a dominant strategy--that is, the return on defection is always higher than that on cooperation--so that defection is the only Nash equilibrium, at least in the one-shot game. In assurance games there are two pure-strategy equilibria: all cooperate (which yields the highest pay-off) and all defect. Underpinning Skyrms' book is the belief that many social situations resemble assurance games rather than the widely studied Prisoners' Dilemma: a view that will seem plausible to anyone who has worked in professional services organizations, where the conflict between stag hunters and hare hunters is a fact of daily life. Given the pervasive nature of assurance games, Skyrms sets out to examine the circumstances under which cooperation is likely to emerge as the predominant outcome in stag hunt versus hare hunt situations.
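The structure of the assurance game can be made concrete with a small sketch. The payoff numbers below are my own illustrative assumptions (4 for mutual stag hunting, 3 for hare hunting regardless of the partner, 0 for a stag hunter paired with a hare hunter), not figures from the book; the point is simply that, unlike the Prisoners' Dilemma, both "all cooperate" and "all defect" survive as pure Nash equilibria:

```python
# Illustrative stag hunt payoff matrix (the numbers are assumptions for exposition).
# Strategy 0 = stag, 1 = hare; PAYOFF[i][j] = row player's payoff when playing i against j.
PAYOFF = [[4, 0],   # stag vs stag: 4; stag vs hare: 0
          [3, 3]]   # hare hunting pays 3 regardless of the partner

def is_pure_nash(i, j):
    """(i, j) is a pure Nash equilibrium if neither player gains by deviating unilaterally."""
    row_ok = all(PAYOFF[i][j] >= PAYOFF[k][j] for k in range(2))
    col_ok = all(PAYOFF[j][i] >= PAYOFF[k][i] for k in range(2))
    return row_ok and col_ok

equilibria = [(i, j) for i in range(2) for j in range(2) if is_pure_nash(i, j)]
print(equilibria)  # [(0, 0), (1, 1)]: both all-stag and all-hare are equilibria
```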
The framework adopted by Skyrms is that of evolutionary game theory, and while his book is largely non-technical, some familiarity with evolutionary game theory is helpful (1). In applying the evolutionary approach, Skyrms considers populations where there are initially groups of stag hunters and of hare hunters, and asks what conditions allow one group to dominate the population. The simplest approach to answering this question is to determine the critical population shares (the boundaries of the 'basins of attraction') at which basic replicator dynamics (namely, a mechanism that allows individuals with higher pay-offs to multiply more quickly than others) will result in a move to a population consisting entirely of one type or the other. For plausible calibrations of the relevant pay-offs, it emerges that the basin of attraction for hare hunting is very much larger than that for stag hunting. In other words, for stag hunting to emerge as dominant in a population with random encounters and simple replicator dynamics, the initial share of stag hunters needs to be very high. This is because the pay-off to a stag hunter from a random encounter with a hare hunter is very low (usually zero), while hare hunters are guaranteed a positive pay-off. As a result, even a few encounters with hare hunters will reduce a stag hunter's expected pay-off below the hare hunter's level. The replicator dynamic then shifts the population balance towards hare hunters in a process that converges to an 'all defect' equilibrium.
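The basin-of-attraction argument can be sketched numerically. Assuming the illustrative payoffs of 4 for stag/stag, 0 for stag/hare, and 3 for hare hunting (my numbers, not Skyrms'), a stag hunter's expected payoff under random matching is 4p, where p is the stag-hunter share, against a constant 3 for hare hunters; the basin boundary therefore sits at p = 3/4:

```python
# Replicator dynamic dp/dt = p*(1-p)*(f_stag - f_hare) for an assumed stag hunt:
# stag/stag = 4, stag/hare = 0, hare = 3 always, so f_stag = 4p and f_hare = 3.
def replicator_step(p, dt=0.1):
    """One Euler step of the replicator dynamic on the stag-hunter share p."""
    f_stag, f_hare = 4 * p, 3.0
    return p + dt * p * (1 - p) * (f_stag - f_hare)

def limit(p, steps=2000):
    """Iterate the dynamic long enough to reach (numerically) the limit share."""
    for _ in range(steps):
        p = replicator_step(p)
    return round(p, 3)

# Below the boundary p* = 3/4, hare hunting takes over; above it, stag hunting does.
print(limit(0.74), limit(0.76))  # 0.0 1.0
```

Even a population that starts nearly three-quarters cooperative is pulled to all-defect, which is the fragility the review describes.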
These results suggest that cooperation is a fragile flower; but is it really? Skyrms examines a number of mechanisms that may make cooperation hardier than the introductory analysis suggests. While the basic mechanisms Skyrms considers are location, signaling, and association, in a short overview such as this it is useful to focus on the role of learning. More specifically, we can consider three types of learning: learning about types--namely, about whether an individual is a hare hunter or a stag hunter; learning about whom to associate with--that is, choosing one's friends and colleagues; and learning about strategy--namely, deciding how one will behave with those one associates with.
Skyrms' discussion of signaling, which is the essence of learning about types, is perhaps the least satisfactory part of his book. This partly reflects the somewhat tricky nature of the area. In effect, most economists will come to Skyrms' discussion with the presumption that, for a signal to credibly distinguish among types, the cost of giving the signal must vary with the type, so that a costless signal--'cheap talk'--will not allow effective signaling to occur (2). (Having a sawed-off pinky and a scar running down one's cheek is a credible signal of toughness; adopting the "crouching tiger, hidden dragon" stance and yelling "back off--I'm a kung fu master" is not.) That presumption is, however, not quite correct. What 'cheap talk' does is expand the strategy space--that is, increase the set of outcomes agents can choose. Where agents have no interest in deception, and talk is cheap, having a wider range of markers (in a sense, having more boxes to choose from) can allow agents to sort themselves by type, increasing efficiency (that is, the combined pay-offs). Reflecting this, Skyrms sets out some fascinating simulation results in which, in equilibrium, 'cheap talk' allows screening to occur.
What is not clear, however, is quite how robust these results are. Attaching even a small cost to signaling, or a small gain from deception, seems to me at least capable of eliminating any gain from the signaling Skyrms considers. In contrast, learning whom to associate with, and what strategy to deploy when one does associate, turn out to be powerful and convincing factors shaping overall outcomes. In particular, Skyrms looks at the consequences of learning mechanisms in which the outcomes of past and current interactions affect the probability of future interactions (for example, because one assigns a higher interaction probability to agents with whom one has experienced higher pay-offs). If this process can work relatively quickly, then stag hunting is strongly favoured. This is because stag hunters learn to avoid hare hunters, and only visit other stag hunters, so securing relatively high pay-offs. Hare hunters visit each other and stag hunters, but because stag hunters do not visit them, they have fewer interactions in total, and hence relatively lower pay-offs. The replicator dynamic then takes over, shifting the population balance to stag hunting.
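A toy reinforcement model conveys how this sorting works. The population, payoffs, and Roth-Erev-style update rule below are my own illustrative assumptions, not Skyrms' specification: each agent holds a propensity weight for every potential partner and reinforces the weights of partners who yielded a payoff, so stag hunters (whose visits to hare hunters pay zero) end up visiting almost only each other:

```python
import random

random.seed(0)  # reproducibility; the qualitative result does not depend on the seed

# Assumed setup: 8 agents, half stag hunters, half hare hunters.
TYPES = ["stag"] * 4 + ["hare"] * 4

def payoff(me, partner):
    if me == "hare":
        return 3.0                            # hare hunting always pays
    return 4.0 if partner == "stag" else 0.0  # stag pays only with a stag partner

n = len(TYPES)
# weights[i][j]: agent i's propensity to visit agent j (zero forbids self-visits)
weights = [[0.0 if i == j else 1.0 for j in range(n)] for i in range(n)]

for _ in range(5000):
    for i in range(n):
        j = random.choices(range(n), weights=weights[i])[0]
        weights[i][j] += payoff(TYPES[i], TYPES[j])   # reinforce rewarding visits

# Share of a stag hunter's visiting propensity now directed at other stag hunters:
stag_share = sum(weights[0][j] for j in range(n) if TYPES[j] == "stag") / sum(weights[0])
print(stag_share > 0.9)  # True: stag hunters have learned to avoid hare hunters
```

Visits to hare hunters are never reinforced, so their weights stay at their initial value while weights on fellow stag hunters compound, which is exactly the avoidance mechanism the paragraph describes.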
Set against the backdrop of this learning about who to associate with, learning about which strategy to deploy turns out to be much more ambiguous. Broadly, there are two ways of thinking about learning about strategy: the first is based on imitation, and involves picking a strategy based on how well it is performing in a reference group (which may be the population as a whole or some narrower group); the second is based on some form of strategic inference, such as choosing the strategy that would have yielded the highest pay-off in the encounters one has experienced to date. Now, whichever of these is adopted, if agents change their strategy easily then defection will triumph. Moreover, the greater the weight of the second type of strategy revision (that is, revision based on strategic inference, such as best response), the quicker and more complete defection's triumph will be.
The underlying mechanism can be grasped by examining a scenario in which sorting is slow (that is, it takes a long time to figure out who your friends are), but learning about strategy is quick, with that learning taking the form of a rule that agents will play on move N+1 the strategy that would have yielded them the best pay-off in move N. In that event, in the early stages, the best response will be heavily weighted to hare hunting, as one's chances of interacting with hare hunters will still be high. As a result, hare hunting will spread, and once it has spread, persist.
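The collapse under fast best-response revision can be sketched in a few lines. Assuming, for illustration, payoffs of 4 for stag/stag, 0 for stag/hare, and a constant 3 for hare hunting (my numbers, not the book's), the best response to any population with fewer than three-quarters stag hunters is to hunt hare, and simultaneous revision then locks the population into all-hare:

```python
# Fast best-response revision: each agent plays next round whatever strategy
# would have earned most against the current population mix (payoffs assumed:
# stag/stag = 4, stag/hare = 0, hare = 3 regardless).
def best_response(stag_share):
    expected_stag = 4 * stag_share   # stag pays off only against stag hunters
    expected_hare = 3.0              # hare pays off regardless
    return "stag" if expected_stag > expected_hare else "hare"

p = 0.6                              # even a solid stag-hunting majority...
for _ in range(3):
    choice = best_response(p)
    p = 1.0 if choice == "stag" else 0.0   # everyone revises at once
print(p)  # 0.0: below the 3/4 threshold, best response collapses to all-hare
```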
Imitation strategies are more forgiving of cooperation, especially when imitation takes the form of "copy the strategy that led to the highest pay-off in the reference group". Imagine a world in which there are pockets of cooperation--pockets that persist because travel costs segment the population and/or because agents can choose (or at least influence) whom they interact with. Within those pockets, stag hunters will earn relatively high pay-offs. A rule that induces imitation of high-payoff behaviour will cause cooperation to spread. If, additionally, the "circle of influence"--the range of agents each agent examines in considering what behaviour to adopt--is large relative to the "circle of interaction", then cooperation can spread far and quickly.
In short, the factors that prove most important in allowing cooperative solutions to emerge are, first, the ability of cooperators to distance themselves from defectors and, second, imitation of the strategies that yield the highest pay-offs in the population, rather than adoption of what would have been the best-response (Nash) strategy in the previous iteration.
The mechanisms that Skyrms highlights as leading to cooperative outcomes are interesting because they are non-Hobbesian, at least in the sense of not depending on voluntary abdication of control to a sovereign who then mandates cooperative behaviour (say by imposing penalties on hare hunting defectors). Skyrms' work does not, however, give any reason for thinking that cooperation will always win. Far from it--he highlights just how fragile cooperation is.
This is best illustrated by starting from the "divide the dollar" game. The essence of this game is that two players need to agree on how to divide a dollar; so long as they can agree on shares that add up to no more than 100 per cent, they each get the share they agreed to. Conventional notions of fairness suggest a 50/50 split; and such a split seems to have that feature of 'salience' that makes for what Thomas Schelling called a 'focal point' in a coordination game. Additionally, and no less interestingly, it is well known that a 50/50 split is the only evolutionarily stable strategy in the "divide the dollar" game (in the sense that if you start from a population playing the 50/50 split and introduce a few greedy mutants, those mutants will fail to survive the standard replicator dynamics).
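The stability claim can be checked with a toy discrete-time replicator simulation. The three-demand grid below (40, 50, 60) is my own simplification for exposition, not Skyrms' model: starting from a fair-division population seeded with both greedy and modest mutants, the mutants are driven out:

```python
# "Divide the dollar", discretized to three possible demands; a player receives
# his or her demand if the two demands fit within 100, and zero otherwise.
DEMANDS = [40, 50, 60]

def payoff(d_self, d_other):
    return d_self if d_self + d_other <= 100 else 0

def fitness(shares):
    """Expected payoff of each demand against the current population mix."""
    return [sum(q * payoff(d, e) for e, q in zip(DEMANDS, shares)) for d in DEMANDS]

def replicator(shares, steps=200):
    """Discrete-time replicator: shares grow in proportion to relative fitness."""
    for _ in range(steps):
        f = fitness(shares)
        avg = sum(p * fi for p, fi in zip(shares, f))
        shares = [p * fi / avg for p, fi in zip(shares, f)]
    return [round(p, 3) for p in shares]

# Start at fair division with 5% greedy (demand 60) and 5% modest (demand 40) mutants:
print(replicator([0.05, 0.90, 0.05]))  # [0.0, 1.0, 0.0]: the 50/50 split repels mutants
```

Greedy demands fail against the fair majority (50 + 60 exceeds 100, so the greedy get nothing), while modest demands leave money on the table; either way the fair strategy out-replicates them.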
What happens, however, if you combine the stag hunting/hare hunting choice with bargaining over the spoils? In period one, in other words, agents get to choose whether to be hare hunters or stag hunters; in period two, they then encounter other agents, and if stag hunters encounter stag hunters, they get a high combined pay-off; and then in period three, the stag hunters have to bargain about how to divide that combined pay-off. The effect of this is to make cooperation riskier, as the stag hunter now runs two risks: the risk of meeting a hare hunter and getting a pay-off of zero; and the risk that, even if s/he meets a stag hunter, that stag hunter will play greedy in period three, reducing the agent's return on cooperation to a lower share or possibly to zero. As a result, for cooperation to emerge, the mechanisms that allow fair-minded participants to distinguish themselves from others must be especially quick. The lessons for social interaction are generally obvious.
The fragility of cooperation also emerges with respect to free-riding. Skyrms illustrates this issue through the "three in a boat" game. This game involves a situation in which there are three players in a boat with two oars; if one happens to be in such a boat with two cooperators, the best response is clearly to free-ride (rather than, say, uselessly paddling the water with one's hands). In contrast, if one is in such a boat with only one cooperator, then the best response is to row (so that the boat gets to the other side). As a result, when the population has a large number of cooperators, the returns on free riding are high.
The implication of this is that "all cooperate" is not a stable equilibrium in the "three in a boat" game, or indeed in any situation where the marginal participant has little or no influence on output but gets to share in that output regardless (3). In effect, for "all cooperate" to persist it would have to overcome the fact that at high levels of cooperation, defection is highly profitable: cooperation and defection are complements, in the sense that an increase in the share of cooperation induces an increase at the margin in the return to defection. However, there is no similar resistance that an "all defect" equilibrium would have to overcome--the gains to cooperating fall, rather than rise, as the share of defectors increases. As a result, the game can readily unravel, with defection becoming universal (4).
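The "three in a boat" incentive structure is easy to tabulate. The benefit and cost figures below are assumed for illustration (the book gives the game qualitatively): the boat crosses if at least two of the three row, rowing costs effort, and everyone aboard shares the benefit of crossing, so the best response is to free-ride except when one is pivotal:

```python
# Toy "three in a boat" payoffs (numbers assumed): the boat crosses when at
# least two of the three players row; crossing benefits all, rowing costs effort.
BENEFIT, COST = 3.0, 1.0

def payoff(i_row, others_rowing):
    crosses = (1 if i_row else 0) + others_rowing >= 2
    return (BENEFIT if crosses else 0.0) - (COST if i_row else 0.0)

# Best response given how many of the other two players row:
for others in (0, 1, 2):
    row, free_ride = payoff(True, others), payoff(False, others)
    print(others, "row" if row > free_ride else "free-ride")
# With 0 or 2 other rowers one free-rides; only the pivotal player (1 other rower) rows.
```

This is the complementarity noted above: the more cooperators there are, the higher the return to defecting, so "all cooperate" cannot be an equilibrium.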
Skyrms is very careful in his presentation and resolutely resists the temptation to draw broader lessons from the mechanisms he examines. That is a pity, as there is a great deal here to be drawn upon. Consider, for example, his result that fast adaptation of strategy to experience can help undermine cooperation. There is an obvious link here to the role of ethical and moral beliefs in sustaining cooperative behaviour: after all, one feature of having strong ethical and moral beliefs is that the occasional encounter with defectors does not lead us to become defectors ourselves, and hence helps prevent cooperative patterns of behaviour from unravelling. Similar considerations are helpful in understanding why price stickiness is a relatively pervasive feature of many markets (5).
In short, this book provides ample food for thought. It is also written with admirable clarity, and is surprisingly well edited and referenced--making it all the more worth reading.
(1) The level at which evolutionary game theory is set out in DIXIT & SKEATH's excellent primer on Games of Strategy (Norton, 1999) is largely sufficient for an understanding of Skyrms' main results. See also MAILATH G (1998) "Do People Play Nash Equilibrium? Lessons from Evolutionary Game Theory", Journal of Economic Literature, 36, 1347-1374.
(2) One important variant of this is the so-called "handicap principle" in zoology--that having a costly attribute (such as splendid plumage or a gazelle's spectacular leap) credibly signals high underlying genetic quality, as only an entity with such high quality could afford to bear the handicap. See Carl T. BERGSTROM "The theory of honest signaling" (2002), at: http://octavia.zoology.washington.edu/handicap/handicap_intro_1.html
(3) That is, in situations such as pure public goods, where output is non-excludable and essentially parametric with respect to the marginal agent.
(4) This has many similarities to the non-convex pay-off structures that underpin empty core outcomes in cooperative game theory.
(5) This is not to suggest that it is appropriate to draw ready inferences from game theory models to observed behaviour--a temptation cogently criticized in Flavio MENEZES & John QUIGGIN (2005), "Games without Rules", forthcoming in Theory and Decision and available at: http://www.uq.edu.au/economics/rsmg/WP/WPR04_7.pdf
by Henry ERGAS (*)
(*) I am grateful to my colleagues Flavio Menezes and Eric Ralph for stimulating discussions of these issues. The usual disclaimers apply.
Publication: Communications & Strategies, October 1, 2006