Bullshit Jobs
Brendan: A lot of these student work jobs have us doing some sort of bullshit task like scanning IDs, or monitoring empty rooms, or cleaning already-clean tables. Everyone is cool with it, because we get money while we study, but otherwise there’s absolutely no reason not to just give students the money and automate or eliminate the work.
I’m not altogether familiar with how the whole thing works, but a lot of this work is funded by the Feds and tied to our student loans. It’s part of a whole federal system designed to assign students a lot of debt—thereby promising to coerce them into labor in the future, as student debts are so hard to get rid of—accompanied by a bullshit education program designed to train and prepare us for our future bullshit jobs.
Brendan has a point, and I’ll be returning to his analysis in a later chapter. Here, though, I want to focus on what students forced into these make-work jobs actually learn from them—lessons that they do not learn from more traditional student occupations and pursuits such as studying for tests, planning parties, and so on. Even judging by Brendan’s and Patrick’s accounts (and I could easily reference many others), I think we can conclude that from these jobs, students learn at least five things:
1. how to operate under others’ direct supervision;
2. how to pretend to work even when nothing needs to be done;
3. that one is not paid money to do things, however useful or important, that one actually enjoys;
4. that one is paid money to do things that are in no way useful or important and that one does not enjoy; and
5. that at least in jobs requiring interaction with the public, even when one is being paid to carry out tasks one does not enjoy, one also has to pretend to be enjoying it.
This is what Brendan meant when he described make-work student employment as a way of “preparing and training” students for their future bullshit jobs. He was studying to be a high school history teacher—a meaningful job, certainly, but, as with almost all teaching positions in the United States, one where the proportion of hours spent teaching in class or preparing lessons has declined while the number of hours dedicated to administrative tasks has increased dramatically. His point is that it’s no coincidence that the more jobs requiring college degrees become suffused with bullshit, the more pressure is put on college students to learn about the real world by dedicating less of their time to self-organized, goal-directed activity and more of it to tasks that will prepare them for the more mindless aspects of their future careers.
why many of our fundamental assumptions about human motivation appear to be incorrect
I do not think there is any thrill that can go through the human heart like that felt by the inventor as he sees some creation of the brain unfolding to success . . . such emotions make a man forget food, sleep, friends, love, everything.
—Nikola Tesla
If the argument of the previous section is correct, one could perhaps conclude that Eric’s problem was just that he hadn’t been sufficiently prepared for the pointlessness of the modern workplace. He had passed through the old education system—some traces of it are left—designed to prepare students to actually do things. This led to false expectations and an initial shock of disillusionment that he could not overcome.
Perhaps. But I don’t think that’s the full story. There is something much deeper going on here. Eric might have been unusually ill-prepared to endure the meaninglessness of his first job, but just about everyone does see such meaninglessness as something to be endured—despite the fact that we are all trained, in one way or another, to assume that human beings should be perfectly delighted to find themselves in his situation of being paid good money not to work.
Let us return to our initial problem. We may begin by asking why we assume that someone being paid to do nothing should consider himself fortunate. What is the basis of that theory of human nature from which this follows? The obvious place to look is at economic theory, which has turned this kind of thought into a science. According to classical economic theory, homo oeconomicus, or “economic man”—that is, the model human being that lies behind every prediction made by the discipline—is assumed to be motivated above all by a calculus of costs and benefits. All the mathematical equations by which economists bedazzle their clients, or the public, are founded on one simple assumption: that everyone, left to his own devices, will choose the course of action that provides the most of what he wants for the least expenditure of resources and effort. It is the simplicity of the formula that makes the equations possible: if one were to admit that humans have complicated motivations, there would be too many factors to take into account, it would be impossible to properly weight them, and predictions could not be made. Therefore, an economist will say that while of course everyone is aware that human beings are not really selfish, calculating machines, assuming that they are makes it possible to explain a very large proportion of what humans do, and this proportion—and only this—is the subject matter of economic science.
This is a reasonable statement as far as it goes. The problem is there are many domains of human life where the assumption clearly doesn’t hold—and some of them are precisely in the domain of what we like to call the economy. If “minimax” (minimize cost, maximize benefit) assumptions were correct, people like Eric would be delighted with their situation. He was receiving a lot of money for virtually zero expenditure of resources and energy—basically bus fare, plus the amount of calories it took to walk around the office and answer a couple of calls. Yet all the other factors (class, expectations, personality, and so on) don’t determine whether someone in that situation would be unhappy—since it would appear that just about anyone in that situation would be unhappy. They only really affect how unhappy they will be.
Much of our public discourse about work starts from the assumption that the economists’ model is correct. People have to be compelled to work; if the poor are to be given relief so they don’t actually starve, it has to be delivered in the most humiliating and onerous ways possible, because otherwise they would become dependent and have no incentive to find proper jobs.4 The underlying assumption is that if humans are offered the option to be parasites, of course they’ll take it.
In fact, almost every bit of available evidence indicates that this is not the case. Human beings certainly tend to rankle over what they consider excessive or degrading work; few may be inclined to work at the pace or intensity that “scientific managers” have, since the 1920s, decided they should; people also have a particular aversion to being humiliated. But leave them to their own devices, and they almost invariably rankle even more at the prospect of having nothing useful to do.
There is endless empirical evidence to back this up. To choose a couple of particularly colorful examples: working-class people who win the lottery and find themselves multimillionaires rarely quit their jobs (and if they do, usually they soon say they regret it).5 Even in those prisons where inmates are provided free food and shelter and are not actually required to work, denying them the right to press shirts in the prison laundry, clean latrines in the prison gym, or package computers for Microsoft in the prison workshop is used as a form of punishment—and this is true even where the work doesn’t pay or where prisoners have access to other income.6 Here we are dealing with people who can be assumed to be among the least altruistic society has produced, yet they find sitting around all day watching television a far worse fate than even the harshest and least rewarding forms of labor.
The redeeming aspect of prison work, as Dostoyevsky noted, is that at least it is seen to be useful—even if it is not useful to the prisoner himself.
Actually, one of the few positive side effects of a prison system is that, simply by providing us with information about what happens, and how humans behave, under extreme situations of deprivation, we can learn basic truths about what it means to be human. To take another example: we now know that placing prisoners in solitary confinement for more than six months at a stretch inevitably results in physically observable forms of brain damage. Human beings are not just social animals; they are so intrinsically social that if they are cut off from relations with other humans, they begin to decay physically.
I suspect the deprivation of meaningful work can be seen in similar terms. Humans may or may not be cut out for regular nine-to-five labor discipline—it seems to me that there is considerable evidence that they aren’t—but even hardened criminals generally find the prospect of just sitting around doing nothing even worse.
Why should this be the case? And just how deeply rooted are such dispositions in human psychology? There is reason to believe the answer is: very deep indeed.
• • •
As early as 1901, the German psychologist Karl Groos discovered that infants express extraordinary happiness when they first figure out they can cause predictable effects in the world, pretty much regardless of what that effect is or whether it could be construed as having any benefit to them. Let’s say they discover that they can move a pencil by randomly moving their arms. Then they realize they can achieve the same effect by moving in the same pattern again. Expressions of utter joy ensue. Groos coined the phrase “the pleasure at being the cause,” suggesting that it is the basis for play, which he saw as the exercise of powers simply for the sake of exercising them.
This discovery has powerful implications for understanding human motivation more generally. Before Groos, most Western political philosophers—and after them, economists and social scientists—had been inclined to assume that humans seek power either out of an inherent desire for conquest and domination or out of a purely practical wish to guarantee access to the sources of physical gratification, safety, or reproductive success. Groos’s findings—which have since been confirmed by a century of experimental evidence—suggested that there might be something much simpler behind what Nietzsche called the “will to power.” Children come to understand that they exist, that they are discrete entities separate from the world around them, largely by coming to understand that “they” are the thing which just caused something to happen—the proof of which is the fact that they can make it happen again.7 Crucially, too, this realization is, from the very beginning, marked with a species of delight that remains the fundamental background of all subsequent human experience.8 It is perhaps hard to think of our sense of self as grounded in action, because when we are truly engrossed in doing something—especially something we know how to do very well, from running a race to solving a complicated logical problem—we tend to forget that we exist. But even as we dissolve into what we do, the foundational “pleasure at being the cause” remains, as it were, the unstated ground of our being.
Groos himself was primarily interested in asking why humans play games, and why they become so passionate and excited over the outcome even when they know it makes no difference who wins or loses outside the confines of the game itself. He saw the creation of imaginary worlds as simply an extension of his core principle. This might be so. But our concern here, unfortunately, is less with the implications for healthy development and more with what happens when something goes terribly wrong. In fact, experiments have also shown that if one first allows a child to discover and experience the delight in being able to cause a certain effect, and then suddenly denies it to them, the results are dramatic: first rage and refusal to engage, and then a kind of catatonic folding in on oneself and withdrawing from the world entirely. Psychiatrist and psychoanalyst Francis Broucek called this the “trauma of failed influence” and suspected that such traumatic experiences might lie behind many mental health issues later in life.9
If this is so, then it begins to give us a sense of why being trapped in a job where one is treated as if one were usefully employed, and has to play along with the pretense that one is usefully employed, but at the same time, is keenly aware one is not usefully employed, would have devastating effects. It’s not just an assault on the person’s sense of self-importance but also a direct attack on the very foundations of the sense that one even is a self. A human being unable to have a meaningful impact on the world ceases to exist.
a brief excursus on the history of make-work and particularly of the concept of buying other people’s time
Boss: How come you’re not working?
Worker: There’s nothing to do.
Boss: Well, you’re supposed to pretend like you’re working.
Worker: Hey, I got a better idea. Why don’t you pretend like I’m working? You get paid more than me.
—Bill Hicks comedy routine
Groos’s theory of “the pleasure at being the cause” led him to devise a theory of play as make-believe: humans invent games and diversions, he proposed, for the exact same reason the infant takes delight in his ability to move a pencil. We wish to exercise our powers as an end in themselves. The fact that the situation is made up doesn’t detract from this; in fact, it adds another level of contrivance. This, Groos suggested—and here he was falling back on the ideas of Romantic German philosopher Friedrich Schiller—is really all that freedom is. (Schiller argued that the desire to create art is simply a manifestation of the urge to play as the exercise of freedom for its own sake as well.10) Freedom is our ability to make things up just for the sake of being able to do so.
Yet at the same time, it is precisely the make-believe aspect of their work that student workers like Patrick and Brendan find the most infuriating—indeed, that just about anyone who’s ever had a wage-labor job that was closely supervised invariably finds the most maddening aspect of her job. Working serves a purpose, or is meant to do so. Being forced to pretend to work just for the sake of working is an indignity, since the demand is perceived—rightly—as the pure exercise of power for its own sake. If make-believe play is the purest expression of human freedom, make-believe work imposed by others is the purest expression of lack of freedom. It’s not entirely surprising, then, that the first historical evidence we have for the notion that certain categories of people really ought to be working at all times, even if there’s nothing to do, and that work needs to be made up to fill their time, even if there’s nothing that really needs doing, refers to people who are not free: prisoners and slaves, two categories that historically have largely overlapped.11
• • •
It would be fascinating, though probably impossible, to write a history of make-work—to explore when and in what circumstances “idleness” first came to be seen as a problem, or even a sin. I’m not aware that anyone has actually tried to do this.12 But all the evidence we have indicates that the modern form of make-work that Patrick and Brendan are complaining about is historically new. This is in part because most people who have ever existed have assumed that normal human work patterns take the form of periodic intense bursts of energy, followed by relaxation, followed by slowly picking up again toward another intense bout. This is what farming is like, for instance: all-hands-on-deck mobilization around planting and harvest, but otherwise, whole seasons taken up largely by minding and mending things, minor projects, and puttering around. But even daily tasks, or projects such as building a house or preparing for a feast, tend to take roughly this form. In other words, the traditional student’s pattern of lackadaisical study leading up to intense cramming before exams and then slacking off again—I like to refer to it as “punctuated hysteria”—is typical of how human beings have always tended to go about necessary tasks if no one forces them to act otherwise.13 Some students may engage in cartoonishly exaggerated versions of this pattern.14 But good students figure out how to get the pace roughly right. Not only is this what humans will do if left to their own devices, but there is no reason to believe that forcing them to act otherwise is likely to produce greater efficiency or productivity. Often it will have precisely the opposite effect.
Obviously, some tasks are more dramatic than others and therefore lend themselves better to alternating intense, frenetic bursts of activity and relative torpor. This has always been true. Hunting animals is more demanding than gathering vegetables, even if the latter is done in sporadic bursts; building houses lends itself better to heroic efforts than cleaning them does. As these examples imply, in most human societies men try, and usually manage, to monopolize the most exciting, dramatic kinds of work—they’ll set the fires that burn down the forest where they plant their fields, for example, and, if they can, relegate to women the more monotonous and time-consuming tasks, such as weeding. One might say that men will always take for themselves the kind of jobs one can tell stories about afterward, and try to assign women the kind one tells stories during.15 The more patriarchal the society (that is, the more power men have over women), the more this will tend to be the case. The same pattern tends to reproduce itself whenever one group is clearly in a position of power over another, with very few exceptions. Feudal lords, insofar as they worked at all, were fighters16—their lives tended to alternate between dramatic feats of arms and near-total idleness and torpor. Peasants and servants obviously were expected to work more steadily. But even so, their work schedule was nothing remotely as regular or disciplined as the current nine-to-five—the typical medieval serf, male or female, probably worked from dawn to dusk for twenty to thirty days out of any given year, but only a few hours a day otherwise, and on feast days not at all. And feast days were not infrequent.
The main reason work could remain so irregular was that it was largely unsupervised. This is true not only of medieval feudalism but also of most labor arrangements anywhere until relatively recent times. It was true even when those labor arrangements were strikingly unequal. If those on the bottom produced what was required of them, those on top didn’t really feel they should have to be bothered knowing what that entailed. We see this again quite clearly in gender relations. The more patriarchal a society, the more segregated men’s and women’s quarters will tend to be; as a result, the less men tend to know about women’s work, and certainly the less able they would be to perform that work if the women were to disappear. (Women, in contrast, are usually well aware of what men’s work entails and are often able to get on quite well were the men for some reason to vanish—this is why, in so many past societies, large percentages of the male population could take off for long periods for war or trade without causing any significant disruption.) Insofar as women in patriarchal societies were supervised, they were supervised by other women. Now, this did often involve a notion that women, unlike men, should keep themselves busy all the time. “Idle fingers knit sweaters for the devil,” my great-grandmother used to warn her daughter back in Poland. But this kind of traditional moralizing is actually quite different from the modern “If you have time to lean, you have time to clean,” because its underlying message is not that you should be working but that you shouldn’t be doing anything else. Essentially, my great-grandmother was saying that anything a teenage girl in a Polish shtetl might be getting up to when she wasn’t knitting was likely to cause trouble. Similarly, one can find occasional warnings by nineteenth-century plantation owners in the American South or the Caribbean that it was better to keep slaves busy even at made-up tasks than to allow them to idle about in the off-season, the reason given always being that slaves left with time on their hands were likely to start plotting to flee or revolt.