  People tend to get happier as they get older (an age effect), presumably because they overcome the hurdles of embarking on adulthood and develop the wisdom to cope with setbacks and to put their lives in perspective.38 (They may pass through a midlife crisis on the way, or take a final slide in the last years of old age.)39 Happiness fluctuates with the times, especially the changing economy—not for nothing do economists call a composite of the inflation rate and the unemployment rate the Misery Index—and Americans have just dug themselves out of a trough that followed the Great Recession.40

  The pattern across the generations also has ups and downs. In two large samples, Americans born in every decade from the 1900s through the 1940s lived happier lives than those in the preceding cohort, presumably because the Great Depression left a scar on the generations who came of age as it deepened. The rise leveled off and then declined a bit with the Baby Boomers and early Generation X, the last generation that was old enough to allow the researchers to disentangle cohort from period.41 In a third study which continues to the present (the General Social Survey), happiness also dipped among the Baby Boomers but fully rebounded in Gen X and the Millennials.42 So while every generation agonizes about the kids today, younger Americans have in fact been getting happier. (As we saw in chapter 12, they have also become less violent and less druggy.) That makes three segments of the population that have become happier amid the American happiness stagnation: African Americans, the successive cohorts leading up to the Baby Boom, and young people today.

  The age-period-cohort tangle means that every historical change in well-being is at least three times as complicated as it appears. With that caveat in mind, let’s take a look at the claims that modernity has unleashed an epidemic of loneliness, suicide, and mental illness.
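
  The tangle can be made concrete with a toy calculation. The sketch below is mine, not the book’s, and its numbers are invented for illustration; it uses the identity “survey year = birth year + age” to show why the three effects resist separation: a pure age effect is observationally identical to a period trend canceled by an equal and opposite cohort trend.

    import numpy as np

    # Illustrative sketch (not from the book): because period = cohort + age,
    # two different stories about well-being can fit the same data exactly.
    rng = np.random.default_rng(0)

    ages = rng.integers(20, 80, size=10_000)          # age at interview
    cohorts = rng.integers(1900, 1990, size=10_000)   # birth year
    periods = cohorts + ages                          # survey year, fixed by the other two

    # Story A: happiness rises with age only (hypothetical slope of 0.02 per year).
    happiness_a = 0.02 * ages

    # Story B: a period trend exactly offset by an opposite cohort trend.
    happiness_b = 0.02 * periods - 0.02 * cohorts

    # The two decompositions are observationally identical.
    print(np.allclose(happiness_a, happiness_b))      # prints True

  Any real disentangling therefore leans on constraints brought in from outside the data, which is why the caveat above applies to every trend that follows.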

  * * *

  To hear the observers of the modern world tell it, Westerners have been getting lonelier. In 1950 David Riesman (together with Nathan Glazer and Reuel Denney) wrote the sociological classic The Lonely Crowd. In 1966 the Beatles wondered where all the lonely people come from, and where they all belong. In a 2000 bestseller the political scientist Robert Putnam noted that Americans were increasingly Bowling Alone. And in 2010 the psychiatrists Jacqueline Olds and Richard Schwartz wrote of The Lonely American (subtitle: Drifting Apart in the Twenty-First Century). For a member of gregarious Homo sapiens, social isolation is a form of torture, and the stress of loneliness a major risk to health and life.43 So it would be another joke on modernity if our newfound connectivity has left us lonelier than ever.

  One might think that social media could make up for whatever alienation and isolation came with the decline of large families and small communities. Today, after all, Eleanor Rigby and Father McKenzie could be Facebook friends. But in The Village Effect the psychologist Susan Pinker reviews research showing that digital friendships don’t provide the psychological benefits of face-to-face contact.

  This only heightens the mystery of why people would be getting lonelier. Among the world’s problems, social isolation would seem to be one of the easier ones to solve: just invite someone you know for a chat at a neighborhood Starbucks or around the kitchen table. Why would people fail to notice the opportunities? Have people today, especially the ever-maligned younger generation, become so addicted to digital crack cocaine that they forgo vital human contact and sentence themselves to needless and perhaps lethal loneliness? Could it really be true, as one social critic put it, that “we have given our hearts to machines, and are now turning into machines”? Has the Internet created, in the words of another, “an atomized world without human contact or emotion”?44 To anyone who believes there is such a thing as human nature, it seems unlikely, and the data show it is false: there is no loneliness epidemic.

  In Still Connected (2011), the sociologist Claude Fischer reviewed forty years of surveys that asked people about their social relationships. “The most striking thing about the data,” he noted, “is how consistent Americans’ ties to family and friends were between the 1970s and 2000s. We rarely find differences of more than a handful of percentage points either way that might describe lasting alterations in behavior with lasting personal consequences—yes, Americans entertained less at home and did more phone calling and emailing, but they did not change much on the fundamentals.”45 Though people have reallocated their time because families are smaller, more people are single, and more women work, Americans today spend as much time with relatives, have the same median number of friends and see them about as often, report as much emotional support, and remain as satisfied with the number and quality of their friendships as their counterparts in the decade of Gerald Ford and Happy Days. Users of the Internet and social media have more contact with friends (though a bit less face-to-face contact), and they feel that the electronic ties have enriched their relationships. Fischer concluded that human nature rules: “People try to adapt to changing circumstances so as to protect their most highly valued ends, which include sustaining the volume and quality of their personal relationships—time with children, contact with relatives, a few sources of intimate support.”46

  What about subjective feelings of loneliness? Surveys of the entire population are sparse; the data Fischer found suggested that “Americans’ expressions of loneliness remained the same or perhaps increased slightly,” mainly because more people were single.47 But surveys of students, a captive audience, are plentiful, and for decades students have been asked whether they agree with statements like “I am unhappy doing so many things alone” and “I have nobody to talk to.” The trends are summarized in the title of a 2015 article, “Declining Loneliness over Time,” and are shown in figure 18-2.

  Since these students were not tracked after they left school, we don’t know whether the decline in loneliness is a period effect, in which it has become steadily easier for young people to satisfy their social needs, or a cohort effect, in which recent generations are more socially satisfied and will remain so. What we do know is that young Americans are not suffering from “toxic levels of emptiness and aimlessness and isolation.”

  Together with “the kids today,” the perennial target of cultural pessimists is technology. In 2015 the sociologist Keith Hampton and his coauthors introduced a report on the psychological effects of social media by noting:

  For generations, commentators have worried about the impact of technology on people’s stress. Trains and industrial machinery were seen as noisy disruptors of pastoral village life that put people on edge. Telephones interrupted quiet times in homes. Watches and clocks added to the dehumanizing time pressures on factory workers to be productive. Radio and television were organized around the advertising that enabled modern consumer culture and heightened people’s status anxieties.48

  Figure 18-2: Loneliness, US students, 1978–2011

  Source: Clark, Loxton, & Tobin 2015. College students (left axis): Revised UCLA Loneliness Scale, trend line across many samples, taken from their fig. 1. High school students (right axis): Mean rating of six loneliness items from the Monitoring the Future survey, triennial means, taken from their fig. 4. Each axis spans half a standard deviation, so the slopes of the college and high school curves are commensurable, but their relative heights are not.

  And so it was inevitable that the critics would shift their focus to social media. But social media can be neither credited nor blamed for the changes in loneliness among American students shown in figure 18-2: the decline proceeded from 1977 through 2009, and the Facebook explosion did not come until 2006. Nor, according to the new surveys, have adults become isolated because of social media. Users of social media have more close friends, express more trust in people, feel more supported, and are more politically involved.49 And notwithstanding the rumor that they are drawn into an anxious competition to keep up with the furious rate of enjoyable activities of their digital faux-friends, social media users do not report higher levels of stress than non-users.50 On the contrary, the women among them are less stressed, with one telling exception: they get upset when they learn that someone they care about has suffered an illness, a death in the family, or some other setback. Social media users care too much, not too little, about other people, and they empathize with them over their troubles rather than envying them their successes.

  Modern life, then, has not crushed our minds and bodies, turned us into atomized machines suffering from toxic levels of emptiness and isolation, or set us drifting apart without human contact or emotion. How did this hysterical misconception arise? Partly it came out of the social critic’s standard formula for sowing panic: Here’s an anecdote, therefore it’s a trend, therefore it’s a crisis. But partly it came from genuine changes in how people interact. People see each other less in traditional venues like clubs, churches, unions, fraternal organizations, and dinner parties, and more in informal gatherings and via digital media. They confide in fewer distant cousins but in more co-workers. They are less likely to have a large number of friends but also less likely to want a large number of friends.51 But just because social life looks different today from the way it looked in the 1950s, it does not mean that humans, that quintessentially social species, have become any less social.

  * * *

  Suicide, one might think, is the most reliable measure of societal unhappiness, in the same way that homicide is the most reliable measure of societal conflict. A person who has died by suicide must have suffered from unhappiness so severe that he or she decided that a permanent end to consciousness was preferable to enduring it. Also, suicides can be tabulated objectively in a way that the experience of unhappiness cannot.

  But in practice, suicide rates are often inscrutable. The very sadness and agitation from which suicide would be a release also addle a person’s judgment, so what ought to be the ultimate existential decision often hinges on the mundane matter of how easy it is to carry out the act. Dorothy Parker’s macabre poem “Résumé” (which ends, “Guns aren’t lawful; Nooses give; Gas smells awful; You might as well live”) is disconcertingly close to the mindset of a person contemplating suicide. A country’s suicide rate can soar or plummet when a convenient and effective method is widely available or taken away, such as coal gas in England in the first half of the 20th century, pesticides in many developing countries, and guns in the United States.52 Suicides increase during economic downturns and political turmoil, not surprisingly, but they are also affected by the weather and the number of daylight hours, and they increase when the media normalize or romanticize recent instances.53 Even the innocuous idea that suicide is an assay for unhappiness may be questioned. A recent study documented a “happiness-suicide paradox” in which happier American states and happier Western countries have slightly higher, rather than lower, suicide rates.54 (The researchers speculate that misery loves company: a personal setback is more painful when everyone around you is happy.) Suicide rates can be capricious for yet another reason. Suicides are often hard to distinguish from accidents (particularly when the cause is a poisoning or drug overdose, but also when it is a fall, a car crash, or a gunshot), and coroners may tilt their classifications in times and places in which suicide is stigmatized or criminalized.

  We do know that suicide is a major cause of death. In the United States there are more than 40,000 suicides a year, making it the tenth-leading cause of death, and worldwide there are about 800,000, making it the fifteenth-leading cause.55 Yet the trends over time and the differences among countries are hard to fathom. In addition to the age-period-cohort snarl, the lines for men and women often go in different directions. Though the suicide rate for women in developed countries fell by more than 40 percent between the mid-1980s and 2013, men kill themselves at around four times the rate of women, so the numbers for men tend to push the overall trends around.56 And no one knows why, for example, the world’s most suicidal countries are Guyana, South Korea, Sri Lanka, and Lithuania, nor why France’s rate shot up from 1976 to 1986 and fell back down by 1999.

  But we know enough to debunk two popular beliefs. The first is that suicide has been steadily rising and has now reached historically high, unprecedented, crisis, or epidemic proportions. Suicide was common enough in the ancient world to have been debated by the Greeks and to have figured in the biblical stories of Samson, Saul, and Judas. Historical data are scarce, not least because suicide, also called “self-murder,” used to be a crime in many countries, including England until 1961. But the data go back more than a century in England, Switzerland, and the United States, and I have plotted them in figure 18-3.

  The annual suicide rate in England was 13 per 100,000 in 1863; it hit peaks of around 19 in the first decade of the 20th century and more than 20 during the Great Depression, plunged during World War II and again in the 1960s, and then fell more gradually to 7.4 in 2007. Switzerland, too, saw a decline of more than twofold, from 24 in 1881 and 27 during the Depression to 12.2 in 2013. The United States suicide rate peaked at around 17 in the early 20th century and again during the Depression before falling to 10.5 at the turn of the millennium, followed by a rise after the recent Great Recession to 13.

  Figure 18-3: Suicide, England, Switzerland, and US, 1860–2014

  Sources: England (including Wales): Thomas & Gunnell 2010, fig. 1, average of male and female rates, provided by Kylie Thomas. The series has not been extended because the data are not commensurable with current records. Switzerland, 1880–1959: Ajdacic-Gross et al. 2006, fig. 1. Switzerland, 1960–2013: WHO Mortality Database, OECD 2015b. United States, 1900–1998: Centers for Disease Control, Carter et al. 2000, table Ab950. United States, 1999–2014: Centers for Disease Control 2015.

  So in all three countries for which we have historical data, suicide was more common in the past than it is today. The visible crests and troughs are the surface of a churning sea of ages, cohorts, periods, and sexes.57 Suicide rates rise sharply during adolescence and then more gently into middle age, where they peak for females (perhaps because they face menopause and an empty nest) and then fall back down, while staying put for males before shooting up in their retirement years (perhaps because they face an end to their traditional role as providers). Part of the recent increase in the American suicide rate can be attributed to the aging of the population, with the large cohort of Boomer males moving into their most suicide-prone years. But the cohorts themselves matter as well. The GI and Silent generations were more reluctant to kill themselves than the Victorian cohorts that preceded them and the Boomers and Gen-Xers that followed them. The Millennials appear to be slowing or reversing the generational rise; adolescent suicide rates fell between the early 1990s and the first decades of the 21st century.58 The times themselves (adjusting for ages and cohorts) have become less conducive to suicide since the peaks around the turn of the 20th century, the 1930s, and the late 1960s to early 1970s; they dropped to a forty-year low in 1999, though we have seen a slight rise again since the Great Recession. This complexity belies the alarmism of the recent New York Times headline “U.S. Suicide Rate Surges to a 30-Year High,” which could also have been titled “Despite the Recession and an Aging Population, U.S. Suicide Rate Is a Third Lower Than Previous Peaks.”59

  Together with the belief that modernity makes people want to kill themselves, the other great myth about suicide is that Sweden, that paragon of Enlightenment humanism, has the world’s highest suicide rate. This urban legend originated (according to what might be another urban legend) in a speech by Dwight Eisenhower in 1960 in which he called out Sweden’s high suicide rate and blamed it on the country’s paternalistic socialism.60 I myself would have blamed the bleak existential films of Ingmar Bergman, but both theories are explanations in search of a fact to explain. Though Sweden’s suicide rate in 1960 was higher than that of the United States (15.2 versus 10.8 per 100,000), it was never the world’s highest, and it has since fallen to 11.1, below the world average (11.6) and the rate for the United States (12.1), and in fifty-eighth place overall.61 A recent review of suicide rates across the world noted that “generally the suicide trend has been downward in Europe and there are currently no Western European welfare states in the world top ten for suicide rates.”62

  * * *

  Everyone occasionally suffers from depression, and some people are stricken with major depression, in which the sadness and hopelessness last more than two weeks and interfere with carrying on with life. In recent decades, more people have been diagnosed with depression, especially in younger cohorts, and the conventional wisdom is captured in the tag line of a recent public television documentary: “A silent epidemic is ravaging the nation and killing our kids.” We have just seen that the nation is not suffering from an epidemic of unhappiness, loneliness, or suicide, so an epidemic of depression seems unlikely, and it turns out to be an illusion.