• • •
Very few economists have actually attempted to measure the overall social value of different professions; most would probably regard the very idea as something of a fool’s errand. But those who have tried tend to confirm that there is indeed an inverse relation between usefulness and pay. In a 2017 paper, the US economists Benjamin B. Lockwood, Charles G. Nathanson, and E. Glen Weyl combed through the existing literature on the “externalities” (social costs) and “spillover effects” (social benefits) associated with a variety of highly paid professions, to see whether it was possible to calculate how much each adds to or subtracts from the economy overall. They concluded that while in some cases—notably anything associated with the creative industries—the values involved were just too subjective to measure, in others a rough approximation was possible. Of the workers whose contributions could be calculated, the most socially valuable turned out to be medical researchers, who add $9 of overall value to society for every $1 they are paid. The least valuable were those who worked in the financial sector, who, on average, subtract a net $1.80 in value from society for every $1 of compensation. (And, of course, workers in the financial sector are often compensated extremely well.)
Here was their overall breakdown, expressed as social value added or subtracted per $1 of pay:13
• researchers +9
• schoolteachers +1
• engineers +.2
• consultants and IT professionals 0
• lawyers –.2
• advertisers and marketing professionals –.3
• managers –.8
• financial sector –1.5
This would certainly seem to confirm a lot of people’s gut suspicions about the overall value of such professions, so it’s nice to see it spelled out, but the authors’ focus on the most highly paid professionals makes it of limited use for present purposes. Schoolteachers are probably the lowest-paid workers on the list, at least on average, and many researchers get by on very little, so the results certainly don’t contradict a negative relation between pay and usefulness; but to get a real sense of the full gamut of employment, one needs a broader sample.
The closest thing I know of to such a study that does use a broader sample was one carried out by the New Economics Foundation in the United Kingdom, whose authors applied a method called “Social Return on Investment Analysis” to examine six representative occupations, three high-income and three low. Here’s a summary of the results:
• city banker – yearly salary c. £5 million – estimated £7 of social value destroyed per £1 paid
• advertising executive – yearly salary c. £500,000 – estimated £11.50 of social value destroyed per £1 paid
• tax accountant – yearly salary c. £125,000 – estimated £11.20 of social value destroyed per £1 paid
• hospital cleaner – yearly income c. £13,000 (£6.26 per hour) – estimated £10 of social value generated per £1 paid
• recycling worker – yearly income c. £12,500 (£6.10 per hour) – estimated £12 of social value generated per £1 paid
• nursery worker – yearly salary c. £11,500 – estimated £7 of social value generated per £1 paid.14
The authors admit that many of their calculations are somewhat subjective, as all such calculations must be, and the study examines only the top and bottom of the income scale. As a result, it leaves out the majority of jobs discussed in this book, which are mostly midrange in pay and whose social benefit, in most cases at least, is neither clearly positive nor negative but seems to hover around zero. Still, as far as it goes, the study strongly confirms the general principle that the more one’s work benefits others, the less one tends to be paid for it.
There are exceptions to this principle, and doctors are the most obvious. Physicians’ salaries tend toward the upper end of the scale, especially in America, yet they do seem to play an indisputably beneficial role. Even here, though, there are health professionals who would argue that doctors are not as much of an exception as they might seem—such as the pharmacist cited a few pages back, who was convinced that most doctors contribute very little to human health or happiness and are mainly just dispensers of placebos. This may or may not be the case; frankly, I don’t have the competence to say. But if nothing else, the oft-cited fact that the overwhelming majority of the improvement in longevity since 1900 is really due to hygiene, nutrition, and other public health measures, and not to improvements in medical treatment,15 suggests that a case could be made that the (very poorly paid) nurses and cleaners employed in a hospital are actually more responsible for positive health outcomes than the hospital’s (very highly paid) physicians.
There is a smattering of other exceptions. Many plumbers and electricians, for instance, do quite well despite their usefulness, and some low-paid work is fairly pointless—but in large measure, the rule does seem to hold true.16
The reasons for this inverse relation between social benefit and level of compensation, however, are quite another matter. None of the obvious answers seem to work. For instance: education levels are very important in determining salary levels, but if this were simply a matter of training and education, the American higher education system would hardly be in the state that it is, with thousands of exquisitely trained PhDs subsisting on adjunct teaching jobs that leave them well below the poverty line—even dependent on food stamps.17 On the other hand, if we were simply talking about supply and demand, it would be impossible to understand why American nurses are paid so much less than corporate lawyers, despite the fact that the United States is currently experiencing an acute shortage of trained nurses and a glut of law school graduates.18
Whatever the reasons—and I myself believe that class power and class loyalty have a great deal to do with it—what is perhaps most disturbing about the situation is that so many people not only acknowledge the inverse relation but also feel this is how things ought to be: that virtue, as the ancient Stoics used to argue, should be its own reward.
Arguments like this have long been made about teachers. It’s commonplace to hear that grade school or middle school teachers shouldn’t be paid well, or certainly not as well as lawyers or executives, because one wouldn’t want people motivated primarily by greed to be teaching children. The argument would make a certain amount of sense if it were applied consistently—but it never is. (I have yet to hear anyone make the same argument about doctors.)
One might even say that the notion that those who benefit society should not be paid too well is a perversion of egalitarianism.
Let me explain what I mean by this. The moral philosopher G. A. Cohen argued that a case could be made for equality of income for all members of society, based on the following logic (or, at least, this is my own bastardized summary): Why, he begins, might one pay certain people more than others? Normally, the justification is that some produce more or benefit society more than others. But then we must ask why they do so:
1. If some people are more talented than others (if they have a beautiful singing voice, say, or a gift for comedy or mathematics), we say they are “gifted.” If someone has already received a benefit (a “gift”), then it makes no sense to give them an additional benefit (more money) for that reason.
2. If some people work harder than others, it is usually impossible to establish the degree to which this is because they have a greater capacity for work (a gift again), and the degree to which it is because they choose to work harder. In the former case, it would again make no sense to reward them further for having an innate advantage over others.
3. Even if it could be proved that some work harder than others purely out of choice, one would then have to establish whether they did so out of altruistic motives—that is, they produced more because they wished to benefit society—or out of selfish motives, because they sought a larger proportion for themselves.
4. In the former case, if they produced more because they were striving to increase social wealth, then giving them a disproportionate share of that wealth would contradict their purpose. It would only make moral sense to reward those driven by selfish motives.
5. Since human motives are generally shifting and confused, one cannot simply divide the workforce into egoists and altruists. One is left with the choice of either rewarding everyone who makes greater efforts, or not doing so. Either option means that some people’s intentions will be frustrated. Altruists will be frustrated in their attempts to benefit society, while egoists will be frustrated in their attempts to benefit themselves. If one is forced to choose one or the other, it makes better moral sense to frustrate the egoists.
6. Therefore, people should not be paid more or otherwise rewarded for greater effort or productivity at work.19
The logic is impeccable. Many of the underlying assumptions could no doubt be challenged on a variety of grounds, but in this chapter I’m not so much interested in whether there is, in fact, a moral case for the equal distribution of income as in observing that, in many ways, our society seems to have embraced the logic of points 3 and 4—just without points 1, 2, 5, or 6. Critically, it rejects the premise that it is impossible to sort workers by their motives: one need only look at what sort of career a worker has chosen. Is there any reason a person might be doing this job other than the money? If so, then that person should be treated as if point 4 applies.
As a result, there is a sense that those who choose to benefit society, and especially those who have the gratification of knowing they benefit society, really have no business also expecting middle-class salaries, paid vacations, and generous retirement packages. By the same token, there is also a feeling that those who have to suffer from the knowledge they are doing pointless or even harmful work just for the sake of the money ought to be rewarded with more money for exactly that reason.
One sees this on the political level all the time. In the UK, for instance, eight years of “austerity” have seen effective pay cuts for almost all government workers who provide immediate and obvious benefits to the public: nurses, bus drivers, firefighters, railroad information booth workers, emergency medical personnel. Things have come to the point where there are full-time nurses who are dependent on charity food banks. Yet creating this situation became such a point of pride for the party in power that parliamentarians were known to let out collective cheers upon voting down bills that proposed to give nurses or police a raise. The same party took a notoriously indulgent view of the sharply rising compensation of the City bankers who had very nearly crashed the world economy a few years before. Yet that government remained highly popular. There is a sense, it would seem, that an ethos of collective sacrifice for the common good should fall disproportionately on those who are already, by their choice of work, engaged in sacrifice for the common good, or who simply have the gratification of knowing their work is productive and useful.
This can make sense only if one first assumes that work—more specifically, paid work—is a value in itself; indeed, so much a value in itself that the motives of the person taking the job, and even the effects of the work, are at best secondary considerations. The flip side of the left-wing protest marchers waving signs demanding “More Jobs” is the right-wing onlooker muttering “Get a job!” as they pass by. There seems to be a broad consensus not so much that work is good as that not working is very bad; that anyone who is not slaving away harder than he’d like at something he doesn’t especially enjoy is a bad person, a scrounger, a skiver, a contemptible parasite unworthy of sympathy or public relief. This feeling is echoed as much in the liberal politician’s protest against the sufferings of “hardworking people” (what about those who work with only moderate intensity?) as it is in conservative complaints about skivers and “welfare queens.” Even more strikingly, the same values are now applied at the top. We no longer hear much about the idle rich—not because they don’t exist, but because their idleness is no longer celebrated. During the Great Depression of the 1930s, impoverished audiences liked to watch high-society movies about the romantic escapades of playboy millionaires. Nowadays they are more likely to be regaled with stories of heroic CEOs and their dawn-to-midnight workaholic schedules.20 In England, newspapers and magazines even write similar things about the royal family, who, we now learn, spend so many hours a week preparing for and carrying out their ritual functions that they barely have time for a private life at all.
Many of the testimonies touched on this work-as-an-end-in-itself morality. Clement had what he described as “a BS job evaluating grants at a public university in the Midwest.” During his off-hours, which were most of them, he spent a lot of time on the Web familiarizing himself with alternative political perspectives, and he eventually came to realize that much of the money flowing through his office was intimately tied to the US war efforts in Iraq and Afghanistan. He quit and, to the surprise and consternation of his coworkers, took a significantly lower-paying job with the local municipality. There, he said, the work is harder, but “at least some of it is interesting and helpful to humans.”
One of the things that puzzled Clement was the way that everyone at his old job felt they had to pretend to one another they were overwhelmed by their responsibilities, despite the obvious fact that they had very little to do:
Clement: My colleagues often discussed how busy things would get and how hard they work, even though they would routinely be gone at two or three in the afternoon. What is the name for this kind of public denial of the crystal-clear reality?
My mind keeps going back to the pressure to value ourselves and others on the basis of how hard we work at something we’d rather not be doing. I believe this attitude exists in the air around us. We sniff it into our noses and exhale it as a social reflex in small-talk; it is one of the guiding principles of social relations here: if you’re not destroying your mind and body via paid work, you’re not living right. Are we to believe that we are sacrificing for our kids, or something, who we don’t get to see because we’re at work all fucking day!?
Clement felt this kind of pressure was especially acute in what he described as the German-Protestant-inflected culture of the American Midwest. Others spoke of Puritanism, but the feeling does not appear to be limited to Protestant or North Atlantic environments. It exists everywhere; the differences are more a matter of varying degrees and intensities. And if the value of work is in part the fact that it’s “something we’d rather not be doing,” it stands to reason that anything we would wish to be doing is less like work and more like play, or a hobby, or something we might consider doing in our spare time, and therefore less deserving of material reward. Probably we shouldn’t be paid for it at all.
This certainly resonates with my own experience. Most academics are first drawn to their careers because they love knowledge and are excited by ideas. After all, pretty much anyone capable of spending seven years earning a PhD knows that she could just as easily have spent three years in law school and come out with a starting salary many times higher. Yet despite that, when two academics in the same department hobnob over coffee, a love of knowledge or excitement about ideas is likely to be the last thing they express. Instead, they will almost invariably complain about how overwhelmed they are with administrative responsibilities. True, this is partly because academics actually are expected to spend less and less of their time reading and writing and more and more of it dealing with administrative problems,21 but even if one is pursuing some exciting new intellectual discovery, it would be seen as inconsiderate to act as if one were enjoying one’s work when others clearly aren’t. Some academic environments are more anti-intellectual than others. But everywhere, at the very least, there is a sense that the pleasurable aspects of one’s calling, such as thinking, are not really what one is being paid for; they are better seen as occasional indulgences one is granted in recognition of one’s real work, which is largely about filling out forms.
Academics aren’t paid for writing or reviewing research articles, but at least the universities that do pay them acknowledge, however reluctantly, that research is part of their job description. In the business world, it’s worse. For instance, Geoff Shullenberger, a writing professor at New York University, reacted to my original 2013 essay with a blog post pointing out that many businesses now feel that if a piece of work is gratifying in any way at all, they really shouldn’t have to pay for it:
For Graeber, bullshit jobs carry with them a moral imperative: “If you’re not busy all the time doing something, anything—doesn’t really matter what it is—you’re a bad person.” But the flipside of that logic seems to be: if you actually like doing X activity, if it is valuable, meaningful, and carries intrinsic rewards for you, it is wrong for you to expect to be paid (well) for it; you should give it freely, even (especially) if by doing so you are allowing others to profit. In other words, we’ll make a living from you doing what you love (for free), but we’ll keep you in check by making sure you have to make a living doing what you hate.
Shullenberger gave the example of translation work. Translating a paragraph or document from one language to another—particularly a dry business document—is not a task many people would do for fun; still, one can imagine reasons people might do it other than the money. (They might be trying to perfect their language abilities, for example.) Therefore, most executives’ first instinct, upon hearing that translation work is required, is to see if they can’t find some way to get someone to do it for free. Yet these very same executives are willing to shell out handsome salaries for “Vice Presidents for Creative Development” and the like, who do absolutely nothing. (In fact, such executives might themselves be Vice Presidents for Creative Development who do nothing at all other than try to figure out how to get others to do work for free.)
Shullenberger speaks of an emerging “voluntariat”: capitalist firms increasingly harvest the work not of paid laborers but of unpaid interns, internet enthusiasts, activists, volunteers, and hobbyists, “digitally sharecropping” popular enthusiasm and creativity and then privatizing and marketing the results.22 The free software industry, perversely enough, has become a paradigm in this respect. The reader may recall Pablo, who introduced the notion of duct taping in chapter 2: software engineering work, he explained, is divided between the interesting and challenging work of developing core technologies and the tedious labor of “applying duct tape” to allow different core technologies to work together, because their designers never bothered to think about compatibility. His main point, though, was that, increasingly, open source means that all the really engaging tasks are done for free: