In retrospect, though, I think that later historians will conclude that the legacy of the 1960s revolution was deeper than we now imagine, and the triumph of capitalist markets and their various planetary administrators and enforcers, which seemed so epochal and permanent in the wake of the collapse of the Soviet Union in 1991, was, in fact, far shallower.
I’ll take an obvious example. One often hears that the decade of antiwar protests in the late 1960s and early 1970s was ultimately a failure, since it did not appreciably speed up the U.S. withdrawal from Indochina. But afterward, those controlling U.S. foreign policy were so anxious about being met with similar popular unrest—and even more, with unrest within the military itself, which was genuinely falling apart by the early 1970s—that they refused to commit U.S. forces to any major ground conflict for almost thirty years. It took 9/11, an attack that led to thousands of civilian deaths on U.S. soil, to fully overcome the notorious “Vietnam syndrome”—and even then, the war planners made an almost obsessive effort to render those wars effectively protest-proof. Propaganda was incessant, the media were carefully brought on board, experts provided exact calculations of the body bag count (how many U.S. casualties it would take to stir mass opposition), and the rules of engagement were carefully written to keep casualties below that threshold.
The problem was that those rules of engagement ensured that thousands of women, children, and old people would end up as “collateral damage” in order to minimize deaths and injuries to U.S. soldiers; the intense hatred this generated for the occupying forces in Iraq and Afghanistan pretty much guaranteed that the United States could not achieve its military objectives. And remarkably, the war planners seemed to be aware of this. It didn’t matter. They considered it far more important to prevent effective opposition at home than to actually win the war. It’s as if American forces in Iraq were ultimately defeated by the ghost of Abbie Hoffman.
Clearly, an antiwar movement in the 1960s that is still tying the hands of U.S. military planners in 2012 can hardly be considered a failure. But it raises an intriguing question: what happens when the creation of that sense of failure, of the complete ineffectiveness of political action against the system, becomes the chief objective of those in power?
The thought first occurred to me while participating in the IMF actions in Washington, D.C., in 2002. Coming shortly on the heels of 9/11, the protests were relatively small and ineffective, and the police presence was overwhelming; there was no sense that we could actually succeed in shutting down the meetings. Most of us left feeling vaguely depressed. It was only a few days later, when I talked to someone who had friends attending the meetings, that I learned we had in fact shut them down: the police had introduced such stringent security measures, canceling half the events, that most of the actual business had been carried out online. In other words, the government had decided it was more important for protesters to walk away feeling like failures than for the IMF meetings to actually take place. If you think about it, that is to afford protesters extraordinary importance.
Is it possible that this preemptive attitude toward social movements, the designing of wars and trade summits in such a way that preventing effective opposition is considered more of a priority than the success of the war or summit itself, really reflects a more general principle? What if those currently running the system, most of whom witnessed the unrest of the 1960s firsthand as impressionable youngsters, are—consciously or unconsciously (and I suspect it’s more conscious than not)—obsessed by the prospect of revolutionary social movements once again challenging prevailing common sense?
It would explain a lot. In most of the world, the last thirty years have come to be known as the age of neoliberalism—an age dominated by a revival of the long-abandoned nineteenth-century creed that held that free markets and human freedom in general were ultimately the same thing. Neoliberalism has always been wracked by a central paradox. It declares that economic imperatives are to take priority over all others. Politics itself is just a matter of creating the conditions for “growing the economy” by allowing the magic of the marketplace to do its work. All other hopes and dreams—of equality, of security—are to be sacrificed for the primary goal of economic productivity. But actually, global economic performance over the last thirty years has been decidedly mediocre. With one or two spectacular exceptions (notably China, which, significantly, ignored most neoliberal prescriptions), growth rates have been far below what they were in the days of the old-fashioned, state-directed, welfare-state-oriented capitalism of the 1950s, 1960s, and even 1970s.1 By its own standards, then, the project was already a colossal failure even before the 2008 collapse.
If, on the other hand, we stop taking world leaders at their word and instead think of neoliberalism as a political project, it suddenly looks spectacularly effective. The politicians, CEOs, trade bureaucrats, and so forth who regularly meet at summits like Davos or the G20 may have done a miserable job in creating a world capitalist economy that actually meets the needs of a majority of the world’s inhabitants (let alone produces hope, happiness, security, or meaning), but they have succeeded magnificently in convincing the world that capitalism—and not just capitalism, but the exact financialized, semifeudal capitalism we happen to have right now—is the only viable economic system. If you think about it, this is a remarkable accomplishment.
How did they pull it off? The preemptive attitude toward social movements is clearly a part of it; under no conditions can alternatives, or anyone proposing alternatives, be seen to experience success. This helps explain the almost unimaginable investment in “security systems” of one sort or another: the fact that the United States, which lacks any major rival, spends more on its military and intelligence than it did during the Cold War, along with the almost dazzling accumulation of private security agencies, intelligence agencies, militarized police, guards, and mercenaries. Then there are the propaganda organs, including a massive media industry devoted to celebrating the police, one that did not even exist before the 1960s. Mostly these systems do not so much attack dissidents directly as contribute to a pervasive climate of fear, jingoistic conformity, life insecurity, and simple despair that makes any thought of changing the world seem an idle fantasy. Yet these security systems are also extremely expensive. Some economists estimate that a quarter of the American population is now engaged in “guard labor” of one sort or another—defending property, supervising work, or otherwise keeping their fellow Americans in line.2 Economically, most of this disciplinary apparatus is pure deadweight.
In fact, most of the economic innovations of the last thirty years make more sense politically than economically. Replacing guaranteed lifetime employment with precarious contracts doesn’t really create a more effective workforce, but it is extraordinarily effective in destroying unions and otherwise depoliticizing labor. The same can be said of endlessly increasing working hours. No one has much time for political activity if they’re working sixty-hour weeks. It does often seem that, whenever there is a choice between one option that makes capitalism seem the only possible economic system and another that would actually make capitalism a more viable economic system, neoliberalism means always choosing the former. The combined result is a relentless campaign against the human imagination. Or, to be more precise: imagination, desire, individual liberation, all those things that were to be set free in the last great world revolution, were to be contained strictly in the domain of consumerism, or perhaps in the virtual realities of the Internet. In all other realms they were to be strictly banished. We are talking about the murdering of dreams, the imposition of an apparatus of hopelessness, designed to squelch any sense of an alternative future. Yet because those in power have put virtually all their efforts into this political project, we are left in the bizarre situation of watching the capitalist system crumble before our very eyes, at just the moment everyone had finally concluded no other system would be possible.
Perhaps this is all we could expect in a world where, as I pointed out in Chapter 2, the ruling class on both sides of the ostensible political divide had come to believe there was no reality outside of what could be created by their own power. Bubble economies are one result of the same political program that has not only made bribery the sovereign principle running our political system but, for those operating within it, the very principle of reality itself. It’s as if the strategy has consumed everything.
But this means any revolution on the level of common sense would have devastating effects on those presently in power. Our rulers have gambled everything on making such an outburst of imagination inconceivable. Were they to lose that bet, the effects would be (for them) ruinous.
Normally, when one challenges the conventional wisdom—that the current economic and political system is the only possible one—the first reaction one is likely to get is a demand for a detailed architectural blueprint of how an alternative system would work, down to the nature of its financial instruments, energy supplies, and policies of sewer maintenance. Next, one is likely to be asked for a detailed program of how this system will be brought into existence. Historically, this is ridiculous. When has social change ever happened according to someone’s blueprint? It’s not as if a small circle of visionaries in Renaissance Florence conceived of something they called “capitalism,” figured out the details of how the stock exchange and factories would someday work, and then put in place a program to bring their visions into reality. In fact, the idea is so absurd we might well ask ourselves how it ever occurred to us to imagine this is how change happens to begin with.
My suspicion is that it’s really a hangover from Enlightenment ideas that have long since faded out virtually everywhere except America. It was popular in the eighteenth century to imagine that nations were founded by great lawgivers (Lycurgus, Solon …) who invented their customs and institutions from whole cloth, much like God was imagined to have created the world, and then (again, like God) stepped away to let the machine essentially run itself. The “spirit of the laws” would thus gradually come to determine the character of the nation. It was a peculiar fantasy, but the authors of the U.S. Constitution believed that was how great nations were founded, and actually attempted to put it into practice. Hence the United States, “a nation of laws and not of men,” is perhaps the only one on earth of which this picture is in any sense true. But even in the United States, as we’ve seen, this is only a very small part of what happened. And later attempts to create new nations and institute political or economic systems from above (the United States’ great twentieth-century rival, the USSR, the only other great nation on earth that was primarily an acronym, is the most frequently cited example here) did not work out particularly well.
All this is not to say there’s anything wrong with utopian visions. Or even blueprints. They just need to be kept in their place. The theorist Michael Albert has worked out a detailed plan for how a modern economy could run without money on a democratic, participatory basis. I think this is an important achievement—not because I think that exact model could ever be instituted, in exactly the form in which he describes it, but because it makes it impossible to say that such a thing is inconceivable. Still, such models can only be thought experiments. We cannot really conceive the problems that will arise when we start actually trying to build a free society. What now seem likely to be the thorniest problems might not be problems at all; others that never even occurred to us might prove devilishly difficult. There are innumerable X-factors. The most obvious is technology. This is the reason it’s so absurd to imagine activists in Renaissance Italy coming up with a model for a stock exchange and factories—what happened was based on all sorts of technologies that they couldn’t have anticipated, but which in part only emerged because society began to move in the direction that it did. This might explain, for instance, why so many of the more compelling visions of an anarchist society have been produced by science fiction writers (Ursula K. Le Guin, Starhawk, Kim Stanley Robinson). In fiction, you are at least admitting the technological aspect is guesswork.
Myself, I am less interested in deciding what sort of economic system we should have in a free society than in creating the means by which people can make such decisions for themselves. This is why I spent so much of this book talking about democratic decision making. And the very experience of taking part in such new forms of decision making encourages one to look on the world with new eyes.
What might a revolution in common sense actually look like? I don’t know, but I can think of any number of pieces of conventional wisdom that surely need challenging if we are to create any sort of viable free society. I’ve already explored one—the nature of money and debt—in some detail in a previous book. I even suggested a debt jubilee, a general cancellation, in part just to bring home that money is really just a human product, a set of promises, that by its nature can always be renegotiated. Here, I’ll list four others:
WORK 1: THE PRODUCTIVIST BARGAIN
A lot of the pernicious assumptions that cripple our sense of political possibility have to do with the nature of work.
The most obvious is the assumption that work is necessarily good, that those unwilling to submit to work discipline are inherently undeserving and immoral, and that the solution to any economic crisis or even economic problem is always that people should work more, or work harder, than they already do. This is one of those assumptions that everyone in mainstream political discourse seems obliged to accept as the ground of conversation. But the moment you think about it, it’s absurd. First of all, it’s a moral position, not an economic one. There is plenty of work being done we’d all probably be better off without, and workaholics are not necessarily better human beings. In fact, I think any levelheaded assessment of the world situation would have to conclude that what’s really needed is not more work, but less. And this is true even if we don’t take into account ecological concerns—that is, the fact that the current pace of the global work machine is rapidly rendering the planet uninhabitable.
Why is the idea so difficult to challenge? I suspect part of the reason is the history of workers’ movements. It is one of the great ironies of the twentieth century that, whenever a politically mobilized working class did win a modicum of political power, it always did so under the leadership of cadres of bureaucrats dedicated to just this sort of productivist ethos—one that most actual workers did not share.* One might even call it “the productivist bargain”: accept the old Puritan ideal that work is a virtue in itself, and you shall be rewarded with consumer paradise. In the early decades of the century, this was the chief distinction between anarchist and socialist unions, which was why the socialist unions always tended to demand higher wages, and the anarchist unions fewer hours (the anarchist unions, most famously, are largely responsible for the eight-hour workday). The socialists embraced the consumer paradise offered by their bourgeois enemies, though they wished to manage the productive system themselves; the anarchists, in contrast, wanted time in which to live, to pursue forms of value of which the capitalists could not even dream. Yet where did the revolutions happen? It was the anarchist constituencies—those who rejected the productivist bargain—that actually rose up: whether in Spain, Russia, China, or almost anywhere a revolution really took place. Yet in every case they ended up under the administration of socialist bureaucrats who embraced that dream of a consumer utopia, even though it was also about the last thing they were ever able to provide. The irony became that the principal social benefit the Soviet Union and similar regimes actually provided—more time, since work discipline becomes a completely different thing when one effectively cannot be fired from one’s job, and everyone was able to get away with working about half the hours they were supposed to—was precisely the one they couldn’t acknowledge; it had to be referred to as “the problem of absenteeism,” standing in the way of an impossible future full of shoes and consumer electronics. Here, too, trade unionists feel obliged to adopt bourgeois terms—in which productivity and labor discipline are absolute values—and to act as if the freedom to lounge about on a construction site is not a hard-won right but actually a problem. Granted, it would be much better to simply work four hours a day than to do four hours’ worth of work in eight, but surely the latter is better than nothing.
WORK 2: WHAT IS LABOR?
Submitting oneself to labor discipline—supervision, control, even the self-control of the ambitious self-employed—does not make one a better person. In most really important ways it probably makes one worse. To undergo it is a misfortune that at best is sometimes necessary. Yet it’s only when we reject the idea that such labor is virtuous in itself that we can start to ask what actually is virtuous about labor. To which the answer is obvious. Labor is virtuous if it helps others. An abandonment of productivism should make it easier to reimagine the very nature of what work is, since, among other things, it will mean that technological development will be redirected less toward creating ever more consumer products and ever more disciplined labor, and more toward eliminating those forms of labor entirely.
What would remain is the kind of work only human beings will ever be able to do: those forms of caring and helping labor that are, I’ve argued, at the very center of the crisis that brought about Occupy Wall Street to begin with. What would happen if we stopped acting as if the primordial form of work is laboring on a production line, in a wheat field, or in an iron foundry, or even in an office cubicle, and instead started from a mother, a teacher, or a caregiver? We might be forced to conclude that the real business of human life is not contributing toward something called “the economy” (a concept that didn’t even exist three hundred years ago), but the fact that we are all, and have always been, projects of mutual creation.