The evolution of cooperation critically depends on the possibility of repeated encounters. Cooperation cannot evolve in a one-shot Prisoner’s Dilemma, and it collapses even in an Iterated Prisoner’s Dilemma if the players know they are playing a limited number of rounds, because as the end of the game approaches, each is tempted to defect without fear of retribution. For similar reasons, subsets of players who are stuck with playing against each other—say because they are neighbors who cannot move—tend to be more forgiving than ones who can pick up and move to another neighborhood to find new partners. Cliques, organizations, and other social networks are virtual neighborhoods because they force groups of people to interact repeatedly, and they too tilt people toward forgiveness, because mutual defection would be ruinous to everyone.
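To make the logic concrete, here is a minimal sketch of an Iterated Prisoner's Dilemma in Python, using the conventional illustrative payoffs from Axelrod's tournaments (temptation 5, mutual cooperation 3, mutual defection 1, sucker's payoff 0); the specific numbers are assumptions for illustration, not figures from the text.

```python
# Minimal sketch of an iterated Prisoner's Dilemma with conventional
# illustrative payoffs: both cooperate -> 3 each; both defect -> 1 each;
# a lone defector gets 5 and the exploited cooperator gets 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then copy the partner's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds and return the two players' total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two reciprocators prosper; a defector gains only a one-round edge.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

With a known final round, of course, the logic unravels: the reciprocator's threat carries no weight on the last move, so defection creeps backward from the end of the game, just as the text describes.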
Human cooperation has another twist. Because we have language, we don’t have to deal with people directly to learn whether they are cooperators or defectors. We can ask around, and find out through the grapevine how the person has behaved in the past. This indirect reciprocity, as game theorists call it, puts a tangible premium on reputation and gossip.188
Potential cooperators have to balance selfishness against mutual benefit not just in dealing with each other in pairs but when acting collectively in groups. Game theorists have explored a multiplayer version of the Prisoner’s Dilemma called the Public Goods game.189 Each player can contribute money toward a common pool, which is then doubled and divided evenly among the players. (One can imagine a group of fishermen chipping in for harbor improvements such as a lighthouse, or merchants in a block of stores pooling contributions for a security guard.) The best outcome for the group is for everyone to contribute the maximum amount. But the best outcome for an individual is to stint on his own contribution and be a free rider on the profits from everyone else’s. The tragedy is that contributions dwindle to zero and everyone ends up worse off. (The biologist Garrett Hardin proposed an identical scenario called the Tragedy of the Commons. Each farmer cannot resist grazing his own cow on the town commons, stripping it bare to everyone’s loss. Pollution, overfishing, and carbon emissions are equivalent real-life examples.)190 But if players have the opportunity to punish free riders, as if in revenge for their exploitation of the group, then the players have an incentive to contribute, and everyone profits.
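A little arithmetic shows why the game is a trap. The sketch below uses an illustrative four-player version with an endowment of 10 units each and the doubling rule described above; the particular stakes are assumptions, not figures from the studies.

```python
# Illustrative payoffs for the Public Goods game described above:
# contributions are pooled, doubled, and split evenly among all players.
def public_goods_payoffs(contributions, multiplier=2.0):
    """Return each player's net payoff: an equal share of the multiplied pot
    minus that player's own contribution."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [share - c for c in contributions]

# Four players with 10 units each. If all contribute fully, everyone gains.
print(public_goods_payoffs([10, 10, 10, 10]))  # [10.0, 10.0, 10.0, 10.0]

# A single free rider does better than the contributors ...
print(public_goods_payoffs([10, 10, 10, 0]))   # [5.0, 5.0, 5.0, 15.0]

# ... so if everyone reasons that way, contributions collapse and no one gains.
print(public_goods_payoffs([0, 0, 0, 0]))      # [0.0, 0.0, 0.0, 0.0]
```

Give the other players a way to fine the free rider and his windfall disappears, which is why the option of punishment restores the incentive to contribute.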
The modeling of the evolution of cooperation has become increasingly byzantine, because so many worlds can be simulated so cheaply. But in the most plausible of these worlds, we see the evolution of the all-too-human phenomena of exploitation, revenge, forgiveness, contrition, reputation, gossip, cliques, and neighborliness.
So does revenge pay in the real world? Does the credible threat of punishment induce fear in the heart of potential exploiters and deter them from exploiting? The answer from the lab is yes.191 When people actually act out Prisoner’s Dilemma games in experiments, they tend toward Tit for Tat–like strategies and end up enjoying the fruits of cooperation. When they play the Trust game (another version of the Prisoner’s Dilemma, which was the game used in the neuroimaging experiments on revenge), the ability of an investor to punish a faithless trustee puts enough fear into the trustee that he returns a fair share of the appreciated investment. In Public Goods games, when players are given the opportunity to punish free riders, people don’t free-ride. And remember the studies in which participants’ essays were savaged and they had an opportunity to shock their critics in revenge? If they knew that the critic would then get a turn to shock them back—to take revenge for the revenge—they held back on the intensity of the shocks.192
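For the Trust game, a rough sketch of the payoffs shows why the threat of punishment changes the trustee's calculus. The multiplier on the investment and the cost-to-damage ratio of punishment below are common experimental conventions assumed for illustration, not parameters reported in the studies cited.

```python
# Rough sketch of a Trust game with an optional punishment stage.
# Multiplier and punishment ratio are illustrative assumptions.
def trust_game(sent, returned, endowment=10, multiplier=3,
               punishment_spent=0, punishment_ratio=2):
    """Investor sends `sent` from an endowment; it is multiplied for the
    trustee, who returns `returned`. The investor may then spend points
    to deduct a multiple of them from the trustee."""
    trustee_pot = sent * multiplier
    investor = endowment - sent + returned - punishment_spent
    trustee = trustee_pot - returned - punishment_spent * punishment_ratio
    return investor, trustee

# A faithless trustee keeps everything ...
print(trust_game(sent=10, returned=0))                       # (0, 30)
# ... but costly punishment makes exploitation unprofitable for the trustee,
print(trust_game(sent=10, returned=0, punishment_spent=10))  # (-10, 10)
# ... so a trustee who fears it returns a fair share instead.
print(trust_game(sent=10, returned=15))                      # (15, 15)
```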
Revenge can work as a deterrent only if the avenger has a reputation for being willing to avenge and the resolve to carry it through even when it is costly. That helps explain why the urge for revenge can be so implacable, consuming, and sometimes self-defeating (as with pursuers of self-help justice who slay an unfaithful spouse or an insulting stranger).193 Moreover, it is most effective when the target knows that the punishment came from the avenger, so he can recalibrate his behavior toward the avenger in the future.194 That explains why an avenger’s thirst is slaked only when the target knows he has been singled out for the punishment.195 These impulses implement what judicial theorists call specific deterrence: a punishment is targeted at a wrongdoer to prevent him from repeating a crime.
The psychology of revenge also implements what judicial theorists call general deterrence: a publicly decreed punishment that is designed to scare third parties away from the temptations of crime. The psychological equivalent of general deterrence is the cultivation of a reputation for being the kind of person who cannot be pushed around. (You don’t tug on Superman’s cape; you don’t spit into the wind; you don’t pull the mask off the old Lone Ranger; and you don’t mess around with Jim.) Experiments have shown that people punish more severely, even at a price that is greater than the amount out of which they have been cheated, when they think an audience is watching.196 And as we saw, men are twice as likely to escalate an argument into a fight when spectators are around.197
The effectiveness of revenge as a deterrent can explain actions that are otherwise puzzling. The rational actor theory, popular in economics and political science, has long been embarrassed by people’s behavior in yet another game, the Ultimatum game.198 One participant, the proposer, gets a sum of money to divide between himself and another participant, the responder, who can take it or leave it. If he leaves it, neither side gets anything. A rational proposer would keep the lion’s share; a rational responder would accept the remaining crumbs, no matter how small, because part of a loaf is better than none. In actual experiments the proposer tends to offer almost half of the jackpot, and the responder doesn’t settle for much less than half, even though turning down a smaller share is an act of spite that punishes both participants. Why do actors in these experiments behave so irrationally? The rational actor theory had neglected the psychology of revenge. When a proposal is too stingy, the responder gets angry—indeed, the neuroimaging study I mentioned earlier, in which the insula lit up in anger, used the Ultimatum game to elicit it.199 The anger impels the responder to punish the proposer in revenge. Most proposers anticipate the anger, so they make an offer that is just generous enough to be accepted. When they don’t have to worry about revenge, because the rules of the game are changed and the responder has to accept the split no matter what (a variation called the Dictator game), the offer is stingier.
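A minimal sketch of the Ultimatum game's payoffs makes the clash between the rational-actor prediction and actual behavior easy to see; the stakes below are an illustrative assumption.

```python
# Sketch of the Ultimatum game: the proposer divides a pot; the responder
# either accepts the split or rejects it, in which case both get nothing.
def ultimatum(pot, offer, responder_accepts):
    """Return (proposer's payoff, responder's payoff)."""
    return (pot - offer, offer) if responder_accepts else (0, 0)

POT = 10
# The "rational actor" prediction: any positive crumb should be accepted.
print(ultimatum(POT, 1, responder_accepts=True))   # (9, 1)

# What people actually do: a stingy offer triggers a spiteful rejection that
# punishes both players, so proposers offer something close to half.
print(ultimatum(POT, 1, responder_accepts=False))  # (0, 0)
print(ultimatum(POT, 5, responder_accepts=True))   # (5, 5)
```

Remove the responder's veto, as the Dictator game does, and the threat of the (0, 0) outcome vanishes, which is why the offers shrink.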
We still have a puzzle. If revenge evolved as a deterrent, then why is it used so often in the real world? Why doesn’t revenge work like the nuclear arsenals in the Cold War, creating a balance of terror that keeps everyone in line? Why should there ever be cycles of vendetta, with vengeance begetting vengeance?
A major reason is the Moralization Gap. People consider the harms they inflict to be justified and forgettable, and the harms they suffer to be unprovoked and grievous. This bookkeeping makes the two sides in an escalating fight count the number of strikes differently and weigh the inflicted harm differently as well.200 As the psychologist Daniel Gilbert has put it, the two combatants in a long-running war often sound like a pair of boys in the backseat of a car making their respective briefs to their parents: “He hit me first!” “He hit me harder!”201
A simple analogy to the way that misperception can lead to escalation may be found in an experiment by Sukhwinder Shergill, Paul Bays, Chris Frith, and Daniel Wolpert, in which participants placed a finger beneath a bar that could press down on it with a precise amount of force.202 Their instruction was to press down on the finger of a second participant for three seconds with the same amount of force they had felt. Then the second participant got the same instructions. The two took turns, each matching the amount of force he or she had just received. After eight turns the second participant was pressing down with about eighteen times as much force as was applied in the round that got it started. The reason for the spiral is that people underestimate how much force they apply compared to how much force they feel, so they escalated the pressure by about 40 percent with each turn. In real-world disputes the misperception comes not from an illusion of the sense of touch but from an illusion of the moral sense; in both cases the result is a spiral of painful escalation.
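As a back-of-envelope check on the arithmetic (an illustration, not a calculation from the paper beyond the roughly 40 percent escalation over eight turns), a constant proportional escalation compounds geometrically:

```python
# If each turn escalates the applied force by roughly 40 percent,
# the pressure compounds geometrically over successive turns.
force = 1.0
for turn in range(8):
    force *= 1.4
print(round(force, 1))  # ~14.8, the same order of magnitude as the roughly
                        # eighteenfold increase the experimenters report
```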
In many parts of this book I have credited the Leviathan—a government with a monopoly on the legitimate use of force—as a major reducer of violence. Feuding and anarchy go together. We can now appreciate the psychology behind the effectiveness of a Leviathan. The law may be an ass, but it is a disinterested ass, and it can weigh harms without the self-serving distortions of the perpetrator or the victim. Though it is guaranteed that one side will disagree with every decision, the government’s monopoly on force prevents the loser from doing anything about it, and it gives him less reason to want to do something about it, because he is not conceding weakness to his adversary and has less incentive to carry on the fight to restore his honor. The fashion accessories of Justitia, the Roman goddess of justice, express the logic succinctly: (1) scales; (2) blindfold; (3) sword.
A Leviathan that implements justice at the point of a sword is still using a sword. As we have seen, government vengeance itself can go to excess, as in the cruel punishments and profligate executions before the Humanitarian Revolution and the excessive incarceration in the United States today. Criminal punishment is often harsher than what would be needed as a finely tuned incentive designed to minimize the society’s sum of harm. Part of this is by design. The rationale for criminal punishment is not just specific deterrence, general deterrence, and incapacitation. It also embraces just deserts, which is basically citizens’ impulse for revenge.203 Even if we were certain that the perpetrator of a heinous crime would never offend again, nor set an example for anyone else, most people would feel that “justice must be done” and that he should incur some harm to balance the harm he has caused. The psychological impulse behind just deserts is thoroughly intelligible. As Daly and Wilson observe:

From the perspective of evolutionary psychology, this almost mystical and seemingly irreducible sort of moral imperative is the output of a mental mechanism with a straightforward adaptive function: to reckon justice and administer punishment by a calculus which ensures that violators reap no advantage from their misdeeds. The enormous volume of mystico-religious bafflegab about atonement and penance and divine justice and the like is the attribution to higher, detached authority of what is actually a mundane, pragmatic matter: discouraging self-interested competitive acts by reducing their profitability to nil.204
But since it is an irreducible imperative, whose evolutionary rationale is invisible to us when we are in the throes of it, the justice that people mete out in practice may be only loosely related to its incentive structure.
The psychologists Kevin Carlsmith, John Darley, and Paul Robinson devised hypothetical cases designed to tease apart deterrence from just deserts.205 Just deserts is sensitive to the moral worth of the perpetrator’s motive. For instance, an embezzler who used his ill-gotten gains to support a lavish lifestyle would seem to deserve a harsher punishment than one who redirected them to the company’s underpaid workers in the developing world. Deterrence, in contrast, is sensitive to the incentive structure of the punishment regime. Assuming that malefactors reckon the utility of a misdeed as the probability they will get caught multiplied by the penalty they will incur if they do get caught, then a crime that is hard to detect should get a harsher punishment than one that is easy to detect. For similar reasons, a crime that gets a lot of publicity should be punished more harshly than one that is unpublicized, because the publicized one will leverage the value of the punishment as a general deterrent. When people are asked to mete out sentences to fictitious malefactors in these scenarios, their decisions are affected only by just deserts, not by deterrence. Evil motives draw harsher sentences, but difficult-to-detect or highly publicized infractions do not.
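The deterrence side of this contrast rests on a simple expected-value calculation. The sketch below works through it with made-up probabilities and penalties (illustrative assumptions, not figures from the study): to keep the expected cost of an offense constant, penalties must rise as detection gets harder.

```python
# Sketch of the deterrence calculus described above: an offender's expected
# penalty is the probability of being caught times the punishment incurred.
def expected_penalty(p_caught, penalty):
    return p_caught * penalty

# An easy-to-detect offense, caught 80% of the time and punished with 5 units:
print(expected_penalty(0.8, 5))   # 4.0

# A hard-to-detect offense, caught only 10% of the time, needs a penalty of
# 40 units to present the would-be offender with the same expected cost:
print(expected_penalty(0.1, 40))  # 4.0
```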
The reforms advocated by the utilitarian economist Cesare Beccaria during the Humanitarian Revolution, which led to the abolition of cruel punishments, were designed to reorient criminal justice away from the raw impulse to make a bad person suffer and toward the practical goal of deterrence. The Carlsmith experiment suggests that people today have not gone all the way toward thinking of criminal justice in purely utilitarian terms. But in The Blank Slate I argued that even the elements of our judicial practices that seem to be motivated by just deserts may ultimately serve a deterrent function, because if a system ever became too narrowly utilitarian, malefactors would learn to game it. Just deserts can close off that option.206
Even the fairest system of criminal justice cannot monitor its citizens wherever they may be and around the clock. It has to count on them internalizing norms of fairness and damping their vengeance before it escalates. In chapter 3 we saw how the ranchers and farmers of Shasta County resolved their grievances without tattling to the police, thanks to reciprocity, gossip, occasional vandalism, and for minor harms, “lumping it.”207 Why do the people in some societies lump it while others experience a glow in their eyes, a flame in their cheeks, and a pounding in their temples? Norbert Elias’s theory of the Civilizing Process suggests that government-administered justice can have knock-on effects that lead its citizens to internalize norms of self-restraint and quash their impulses for retribution rather than act on them. We saw many examples in chapters 2 and 3 of how pacification by a government has a whopping effect on lethal vengeance, and in the next chapter we will review experiments showing that self-control in one context can spread into others.
Chapter 3 also introduced the finding that the sheer presence of government brings rates of violence down only so far—from the hundreds of homicides per 100,000 people per year to the tens. A further drop into the single digits may depend on something hazier, such as people’s acceptance of the legitimacy of the government and social contract. A recent experiment may have caught a wisp of this phenomenon in the lab. The economists Benedikt Herrmann, Christian Thöni, and Simon Gächter had university students in sixteen countries play Public Goods games (the game in which players contribute money to a pot which is then doubled and redistributed among them), with and without the possibility of punishing one another.208 The researchers discovered to their horror that in some countries many players punished generous contributors to the common good rather than stingy ones. These acts of spite had predictably terrible effects on the group’s welfare, because they only reinforced every player’s worst instinct to free-ride on the contributions of the others. The contributions soon petered out, and everyone lost. The antisocial punishers seem to have been motivated by an excess of revenge. When they themselves had been punished for a low contribution, rather than being chastened and increasing their contribution on the next round (which is what participants in the original studies conducted in the United States and Western Europe had done), they punished their punishers, who tended to be the altruistic contributors.
What distinguishes the countries in which the targets of punishment repent, such as the United States, Australia, China, and those of Western Europe, from those in which they spitefully retaliate, like Russia, Ukraine, Greece, Saudi Arabia, and Oman? The investigators ran a set of multiple regressions using a dozen traits of the different countries, taken from economic statistics and the results of international surveys. A major predictor of excess revenge turned out to be civic norms: a measure of the degree to which people think it is all right to cheat on their income taxes, claim government benefits to which they are not entitled, and dodge fare-collectors on the subway. (Social scientists believe that civic norms make up a large part of the social capital of a country, which is more important to its prosperity than its physical resources.) Where might the civic norms themselves have come from? The World Bank assigns countries a score called the Rule of Law, which reflects how well private contracts can be enforced in courts, whether the legal system is perceived as being fair, the importance of the black market and organized crime, the quality of the police, and the likelihood of crime and violence. In the experiment, the Rule of Law of a country significantly predicted the degree to which its citizens indulged in antisocial revenge: the people in countries with an iffy Rule of Law were more destructively vengeful. With the usual spaghetti of variables, it’s impossible to be certain what caused what, but the results are consistent with the idea that the disinterested justice of a decent Leviathan induces citizens to curb their impulse for revenge before it spirals into a destructive cycle.
Revenge, for all its tendency to escalate, must come with a dimmer switch. If it didn’t, the Moralization Gap would inflate every affront into an escalating feud, like the experimental subjects who mashed down on each other’s fingers harder and harder with every round. Not only does revenge not always escalate, especially in civil societies with the rule of law, but we shouldn’t expect it to. The models of the evolution of cooperation showed that the most successful agents dial back their tit-for-tatting with contrition and forgiveness, especially when trapped in the same boat with other agents.
In Beyond Revenge: The Evolution of the Forgiveness Instinct, the psychologist Michael McCullough shows that we do have this dimmer switch for revenge.209 As we have seen, several species of primate can kiss and make up after a fight, at least if their interests are bound by kinship, shared goals, or common enemies.210 McCullough shows that the human forgiveness instinct is activated under similar circumstances.