Sometimes, too, around the turn of the twentieth century, individual anarchists would strike directly against world leaders or robber barons (as they were then called) with assassinations or bombings: in the period from roughly 1894 to 1901 there was a particularly intense spate, which led to the deaths of one French president, one Spanish prime minister, and U.S. president William McKinley, as well as attacks on at least a dozen other kings, princes, secret police chiefs, industrialists, and heads of state. This is the period that produced the notorious image of the anarchist bomb thrower, which has lingered in the popular imagination ever since. Anarchist thinkers like Peter Kropotkin and Emma Goldman often struggled with what to say about such attacks, which were often carried out by isolated individuals who were not actually part of any anarchist union or association. Still, it’s worthy of note that anarchists were perhaps the first modern political movement to (gradually) realize that, as a political strategy, terrorism, even when it is not directed at innocents, doesn’t work. For nearly a century now, in fact, anarchism has been one of the very few political philosophies whose exponents never blow anyone up (indeed, the twentieth-century political leader who drew most from the anarchist tradition was Mohandas K. Gandhi). Yet for the period of roughly 1914 to 1989, during which time the world was continually either fighting or preparing for world wars, anarchism went into something of an eclipse for precisely that reason: to seem “realistic” in such violent times a political movement had to be capable of organizing tank armies, aircraft carriers, and ballistic missile systems, and that was one thing at which Marxists could often excel, but which everyone recognized anarchists—rather to their credit, in my opinion—would never be able to pull off. It was only after 1989, when the age of great-war mobilizations seemed to have come to an end, that a global revolutionary movement based on anarchist principles—the Global Justice Movement—reappeared.

  There are endless varieties, colors, and tendencies of anarchism. For my own part, I like to call myself a “small-a” anarchist. I’m less interested in figuring out what sort of anarchist I am than in working in broad coalitions that operate in accord with anarchist principles: movements that are not trying to work through or become governments; movements uninterested in assuming the role of de facto government institutions like trade organizations or capitalist firms; groups that focus on making our relations with each other a model of the world we wish to create. In other words, people working toward truly free societies. After all, it’s hard to figure out exactly what kind of anarchism makes the most sense when so many questions can only be answered further down the road. Would there be a role for markets in a truly free society? How could we know? I myself am confident, based on history,25 that even if we did try to maintain a market economy in such a free society—that is, one in which there would be no state to enforce contracts, so that agreements came to be based only on trust—economic relations would rapidly morph into something libertarians would find completely unrecognizable, and would soon not resemble anything we are used to thinking of as a “market” at all. I certainly can’t imagine anyone agreeing to work for wages if they have any other options. But who knows, maybe I’m wrong. I am less interested in working out the detailed architecture of a free society than in creating the conditions that would enable us to find out.

  We have little idea what sort of organizations, or for that matter, technologies, would emerge if free people were unfettered to use their imagination to actually solve collective problems rather than to make them worse. But the primary question is: how do we even get there? What would it take to allow our political and economic systems to become a mode of collective problem solving rather than, as they are now, a mode of collective war?

  Even anarchists have taken a very long time to come around to grappling with the full extent of this problem. When anarchism was part of the broader workers’ movement, for example, it tended to accept that “democracy” meant majority voting and Robert’s Rules of Order, relying on appeals to solidarity to convince the minority to go along. Appeals to solidarity can be very effective when one is locked in life-or-death conflict of one sort or another, as revolutionaries usually were. The CNT, the anarchist labor union in Spain of the 1920s and 1930s, relied on a principle that when a workplace voted to strike, no member who had voted against striking was bound by the decision; the result was, almost invariably, 100 percent compliance. But again, strikes were quasi-military operations. Local rural communes tended to fall back, as rural communities everywhere do, on some sort of de facto consensus.

  In the United States, on the other hand, consensus, rather than majority voting, has often been used by grassroots organizers who were not, explicitly, anarchists: SNCC, the Student Nonviolent Coordinating Committee, which was the horizontal branch of the civil rights movement, operated by consensus, and SDS, Students for a Democratic Society, claimed in its constitutional principles to operate by parliamentary procedure, but in fact tended to rely on consensus in practice. Most of those who participated in such meetings felt the process used at the time was crude, improvised, and often extremely frustrating. Part of the reason was that Americans, for all their democratic spirit, mostly had absolutely no experience of democratic deliberation. There’s a famous story from the civil rights movement of a small group of activists trying to come to a collective decision in an emergency situation, unable to attain consensus. At one point, one of them gave up, pulled out a gun, and aimed it directly at the facilitator. “Either make a decision for us,” he said, “or I’ll shoot you.” The facilitator replied, “Well I guess you’ll just have to shoot me then.” It took a very long time to develop what might be called a culture of democracy, and when it did emerge, it came from surprising directions: spiritual traditions, Quakerism, for instance, and feminism.

  The American Society of Friends, the Quakers, had spent centuries developing their own form of consensus decision making as a spiritual exercise. Quakers had also been active in most grassroots American social movements from Abolitionism onward, but until the 1970s they were not, for the most part, willing to teach others their techniques, for the precise reason that they considered it a spiritual matter, a part of their religion. “You rely on consensus,” George Lakey, a famous Quaker pacifist activist, once explained, “when you have a shared understanding of the theology. It is not to be imposed on people. Quakers, at least in the ’50s, were anti-proselytizing.”26 It was really only a crisis in the feminist movement—which started using informal consensus in small consciousness-raising groups of usually around a dozen people, but ran into all sorts of problems with cliques and tacit leadership structures when those groups grew larger—that eventually inspired some dissident Quakers (the most famous was Lakey himself) to pitch in and begin disseminating some of their techniques. These techniques, in turn, now infused with a specifically feminist ethos, came to be modified when adopted for larger and more diverse groups.27

  This is just one example of how what has now come to be called “Anarchist Process”—all those elaborate techniques of facilitation and consensus finding, the hand signals and the like—emerged from radical feminism, Quakerism, and even Native American traditions. In fact, the particular variety employed in North America should really be called “feminist process” rather than “anarchist process.” These methods became identified with anarchism precisely because anarchists recognized them to be forms that could be employed in a free society, in which no one could be physically coerced to go along with a decision they found profoundly objectionable.a

  Consensus is not just a set of techniques. When we talk about process, what we’re really talking about is the gradual creation of a culture of democracy. This brings us back to rethinking some of our most basic assumptions about what democracy is even about.

  If we return to the writings of men like Adams and Madison or even Jefferson in this light, it’s easy to see that, elitist though they were, some of their criticisms of democracy deserve to be taken seriously. First of all, they argued that instituting a system of majoritarian direct democracy among white adult males in a society deeply divided by inequalities of wealth would likely lead to tumultuous, unstable, and ultimately bloody results, to the rise of demagogues and tyrants. Here they were probably right.

  Another argument they made is that only established men of property should be allowed to vote and hold office because only they were sufficiently independent, and therefore free enough of self-interest, that they could afford to think about the common good. This latter argument is important and deserves more attention than it has usually been given.

  Obviously, the way it was framed was nothing if not elitist. The profound hypocrisy of arguing that the common people lacked education or rationality comes through clearly in the writings of men like Gouverneur Morris, who was willing to admit, at least in a private letter to a fellow member of the gentry, that it was the opposite idea—that ordinary people had acquired education and were capable of framing rational arguments—that terrified him most of all.

  But the real problem with arguments based on the presumed “irrationality” of the common people was in the underlying assumptions about what constituted “rationality.” One common argument against popular rule in the early republic was that the “eight or nine millions who have no property,” as Adams put it, were incapable of rational judgment because they were unused to managing their own affairs. Servants and wage laborers, let alone women and slaves, were accustomed to taking orders. Some among the elites held this to be because they were capable of nothing else; some simply saw it as the outcome of their habitual circumstances. But almost all agreed that if such people were given the vote, they would not think about what was best for the country but immediately attach themselves to some leader—either because that leader bought them off in some way (promised to abolish their debts, or even directly paid them), or just because following others was all they knew how to do. An excess of liberty, therefore, would only lead to tyranny as the people threw themselves on the mercies of charismatic leaders. At best, it would result in “factionalism,” a political system dominated by political parties—almost all the framers were strongly opposed to the emergence of a party system—battling over their respective interests. Here they were right: while major class warfare didn’t ensue—partly because of the escape hatch of the frontier—factionalism and political parties immediately followed once an even modestly expanded franchise began to be put into place in the 1820s and 1830s. The fears of the elites were not entirely misplaced.

  The notion that only men with property can be fully rational, and that others exist primarily to follow orders, traces back at least to Athens. Aristotle states the matter quite explicitly at the beginning of his Politics, where he argues that only free adult males can be fully rational beings, in control of their own bodies just as they are in control of others: their women, children, and slaves. Here, then, is the real flaw in the whole tradition of “rationality” that the Founders inherited. It’s not ultimately about self-sufficiency, about being disinterested. To be rational in this tradition has everything to do with the ability to issue commands: to stand apart from a situation, assess it from a distance, make the appropriate set of calculations, and then tell others what to do.28 Essentially, it is the kind of calculation one can make only when one can tell others to shut up and do as they are told, not work with them as free equals in search of solutions. It’s only the habit of command that allows one to imagine that the world can be reduced to the equivalent of mathematical formulae, formulae that can be applied to any situation, regardless of its real human complexities.

  This is why any philosophy that begins by proposing that humans are, or should be, rational—as cold and calculating as a lord—invariably ends up concluding that, really, we’re the opposite: that reason, as Hume so famously put it, is always, and can only be, the “slave of the passions.” We seek pleasure; therefore we seek property, to guarantee our access to pleasure; therefore we seek power, to guarantee our access to property. In every case there’s no natural end to it; we’ll always seek more and more and more. This theory of human nature is already present in the ancient philosophers (and is their explanation for why democracy can only be disastrous), and recurs in the Christian tradition of Saint Augustine in the guise of original sin, and in the atheist Thomas Hobbes’s theory of why a state of nature could only have been a violent “war of all against all,” and again, of course, of why democracy must necessarily be disastrous. The creators of the eighteenth-century republican constitutions shared these assumptions as well. Humans were really incorrigible. So for all the occasional high-minded language, most of these philosophers were ultimately willing to admit that the only real choice was between utterly blind passions and the rational calculation of the interests of an elite class; the ideal constitution, therefore, was one designed to ensure that such interests checked each other and ultimately balanced out.

  This has some curious implications. On the one hand, it is universally held that democracy means little without free speech, a free press, and the means for open political deliberation and debate. At the same time, most theorists of liberal democracy—from Jean-Jacques Rousseau to John Rawls—grant that sphere of deliberation an incredibly limited purview, since they assume a set of political actors (politicians, voters, interest groups) who already know what they want before they show up in the political arena. Rather than using the political sphere to decide how to balance competing values, or make up their minds about the best course of action, such political actors, if they think about anything, consider only how best to pursue their already existing interests.29

  So this leaves us with a democracy of the “rational,” where we define rationality as detached mathematical calculation born of the power to issue commands, the kind of “rationality” that will inevitably produce monsters. As the basis for a true democratic system, these terms are clearly disastrous. But what is the alternative? How to found a theory of democracy on the kind of reasoning that goes on, instead, between equals?

  One reason this has been difficult to do is that this sort of reasoning is actually more complex and sophisticated than simple mathematical calculation, and therefore doesn’t lend itself to the quantifiable models beloved of political scientists and those who assess grant applications. After all, when we ask if a person is being rational, we aren’t asking very much: really, just whether they are capable of making basic logical connections. The matter rarely comes up unless one suspects someone might actually be crazy or perhaps so blinded by passion that their arguments make no sense. Consider, in contrast, what’s entailed when we ask if someone is being “reasonable.” The standard here is much higher. Reasonableness implies a much more sophisticated ability to achieve a balance between different perspectives, values, and imperatives, none of which, usually, could possibly be reduced to mathematical formulae. It means coming up with a compromise between positions that are, according to formal logic, incommensurable, just as there’s no formal way, when deciding what to cook for dinner, to measure the contrasting advantages of ease of preparation, healthiness, and taste. But of course we make such decisions all the time. Most of life—particularly life with others—consists of making reasonable compromises that could never be reduced to mathematical models.

  Another way to put this is that political theorists tend to assume actors who are operating on the intellectual level of an eight-year-old. Developmental psychologists have observed that children begin to make logical arguments not to solve problems, but to come up with reasons for what they already want to think. Anyone who deals with small children on a regular basis will immediately recognize that this is true. The ability to compare and coordinate contrasting perspectives, on the other hand, comes later and is the very essence of mature intelligence. It’s also precisely what those used to the power of command rarely have to do.

  The philosopher Stephen Toulmin, already famous for his models of moral reasoning, made something of an intellectual splash in the 1990s when he tried to develop a similar contrast between rationality and reasonableness, though he saw the basis for rationality as deriving not from the power of command but from the need for absolute certainty. Contrasting the generous spirit of an essayist like Montaigne, who wrote in the expansive Europe of the sixteenth century and assumed that truth is always situational, with the well-nigh paranoid rigor of René Descartes, who wrote a century later, when Europe had collapsed into bloody wars of religion, and who conceived a vision of society based on purely “rational” grounds, Toulmin proposed that all subsequent political thought has been bedeviled by attempts to apply impossible standards of abstract rationality to concrete human realities. But Toulmin wasn’t the first to propose the distinction. I myself first encountered it in a rather whimsical essay published in 1960 by the British poet Robert Graves called “The Case for Xanthippe.”