Diagnoses of the malaise of the humanities rightly point to anti-intellectual trends in our culture and to the commercialization of universities. But an honest appraisal would have to acknowledge that some of the damage is self-inflicted. The humanities have yet to recover from the disaster of postmodernism, with its defiant obscurantism, self-refuting relativism, and suffocating political correctness. Many of its luminaries—Nietzsche, Heidegger, Foucault, Lacan, Derrida, the Critical Theorists—are morose cultural pessimists who declare that modernity is odious, all statements are paradoxical, works of art are tools of oppression, liberal democracy is the same as fascism, and Western civilization is circling the drain.54
With such a cheery view of the world, it’s not surprising that the humanities often have trouble defining a progressive agenda for their own enterprise. Several university presidents and provosts have lamented to me that when a scientist comes into their office, it’s to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it’s to plead for respect for the way things have always been done. Those ways do deserve respect, and there can be no replacement for the close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding?
A consilience with science offers the humanities many possibilities for new insight. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, and a forward-looking agenda that could attract ambitious young talent (not to mention appealing to deans and donors). The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanities scholars.
In some fields, this consilience is a fait accompli. Archaeology has grown from a branch of art history to a high-tech science. The philosophy of mind shades into mathematical logic, computer science, cognitive science, and neuroscience. Linguistics combines philological scholarship on the history of words and grammatical constructions with laboratory studies of speech, mathematical models of grammar, and the computerized analysis of large corpora of writing and conversation.
Political theory, too, has a natural affinity with the sciences of mind. “What is government,” asked James Madison, “but the greatest of all reflections on human nature?” Social, political, and cognitive scientists are reexamining the connections between politics and human nature, which were avidly debated in Madison’s time but submerged during an interlude in which humans were treated as blank slates or rational actors. Humans, we now know, are moralistic actors: they are guided by intuitions about authority, tribe, and purity; are committed to sacred beliefs that express their identity; and are driven by conflicting inclinations toward revenge and reconciliation. We are starting to grasp why these impulses evolved, how they are implemented in the brain, how they differ among individuals, cultures, and subcultures, and which conditions turn them on and off.55
Comparable opportunities beckon in other areas of the humanities. The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces, landscapes, and geometric forms.56 Music scholars have much to discuss with the scientists who study the perception of speech, the structure of language, and the brain’s analysis of the auditory world.57
As for literary scholarship, where to begin?58 John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Cognitive psychology can shed light on how readers reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture, and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries which are the drivers of plot. All these ideas can help add new depth to Dryden’s observation about fiction and human nature.
Though many concerns in the humanities are best appreciated with traditional narrative criticism, some raise empirical questions that can be informed by data. The advent of data science applied to books, periodicals, correspondence, and musical scores has inaugurated an expansive new “digital humanities.”59 The possibilities for theory and discovery are limited only by the imagination, and include the origin and spread of ideas, networks of intellectual and artistic influence, the contours of historical memory, the waxing and waning of themes in literature, the universality or culture-specificity of archetypes and plots, and patterns of unofficial censorship and taboo.
The promise of a unification of knowledge can be fulfilled only if knowledge flows in all directions. Some of the scholars who have recoiled from scientists’ forays into explaining art are correct that these explanations have been, by their standards, shallow and simplistic. All the more reason for them to reach out and combine their erudition about individual works and genres with scientific insight into human emotions and aesthetic responses. Better still, universities could train a new generation of scholars who are fluent in each of the two cultures.
Although humanities scholars themselves tend to be receptive to insights from science, many policemen of the Second Culture proclaim that they may not indulge such curiosity. In a dismissive review in the New Yorker of a book by the literary scholar Jonathan Gottschall on the evolution of the narrative instinct, Adam Gopnik writes, “The interesting questions about stories . . . are not about what makes a taste for them ‘universal,’ but what makes the good ones so different from the dull ones. . . . This is a case, as with women’s fashion, where the subtle, ‘surface’ differences are actually the whole of the subject.”60 But in appreciating literature, must connoisseurship really be the whole of the subject? An inquisitive spirit might also be curious about the recurring ways in which minds separated by culture and era deal with the timeless conundrums of human existence.
Wieseltier, too, has issued crippling diktats on what scholarship in the humanities may not do, such as make progress. “The vexations of philosophy . . . are not retired,” he declared; “errors [are] not corrected and discarded.”61 In fact, most moral philosophers today would say that the old arguments defending slavery as a natural institution are errors which have been corrected and discarded. Epistemologists might add that their field has progressed from the days when Descartes could argue that human perception is veridical because God would not deceive us. Wieseltier further stipulates that there is a “momentous distinction between the study of the natural world and the study of the human world,” and any move to “transgress the borders between realms” could only make the humanities the “handmaiden of the sciences,” because “a scientific explanation will expose the underlying sameness” and “absorb all the realms into a single realm, into their realm.” Where does this paranoia and territoriality lead? In a major essay in the New York Times Book Review, Wieseltier called for a worldview that is pre-Darwinian—“the irreducibility of the human difference to any aspect of our animality”—indeed, pre-Copernican—“the centrality of humankind to the universe.”62
Let’s hope that artists and scholars don’t follow their self-appointed defenders over this cliff. Our quest to come to terms with the human predicament need not be frozen in the last century or the century before, let alone the Middle Ages. Surely our theories of politics, culture, and morality have much to learn from our best understanding of the universe and our makeup as a species.
In 1778 Thomas Paine extolled the cosmopolitan virtues of science:
Science, the partisan of no country, but the beneficent patroness of all, has liberally opened a temple where all may meet. Her influence on the mind, like the sun on the chilled earth, has long been preparing it for higher cultivation and further improvement. The philosopher of one country sees not an enemy in the philosophy of another: he takes his seat in the temple of science, and asks not who sits beside him.63
What he wrote about the physical landscape applies as well to the landscape of knowledge. In this and other ways, the spirit of science is the spirit of the Enlightenment.
CHAPTER 23
HUMANISM
Science is not enough to bring about progress. “Everything that is not forbidden by laws of nature is achievable, given the right knowledge”—but that’s the problem. “Everything” means everything: vaccines and bioweapons, video on demand and Big Brother on the telescreen. Something in addition to science ensured that vaccines were put to use in eradicating diseases while bioweapons were outlawed. That’s why I preceded the epigraph from David Deutsch with the one from Spinoza: “Those who are governed by reason desire nothing for themselves which they do not also desire for the rest of humankind.” Progress consists of deploying knowledge to allow all of humankind to flourish in the same way that each of us seeks to flourish.
The goal of maximizing human flourishing—life, health, happiness, freedom, knowledge, love, richness of experience—may be called humanism. (Despite the word’s root, humanism doesn’t exclude the flourishing of animals, but this book focuses on the welfare of humankind.) It is humanism that identifies what we should try to achieve with our knowledge. It provides the ought that supplements the is. It distinguishes true progress from mere mastery.
There is a growing movement called Humanism, which promotes a non-supernatural basis for meaning and ethics: good without God.1 Its aims have been stated in a trio of manifestoes starting in 1933. The Humanist Manifesto III, from 2003, affirms:
Knowledge of the world is derived by observation, experimentation, and rational analysis. Humanists find that science is the best method for determining this knowledge as well as for solving problems and developing beneficial technologies. We also recognize the value of new departures in thought, the arts, and inner experience—each subject to analysis by critical intelligence.
Humans are an integral part of nature, the result of unguided evolutionary change. . . . We accept our life as all and enough, distinguishing things as they are from things as we might wish or imagine them to be. We welcome the challenges of the future, and are drawn to and undaunted by the yet to be known.
Ethical values are derived from human need and interest as tested by experience. Humanists ground values in human welfare shaped by human circumstances, interests, and concerns and extended to the global ecosystem and beyond. . . .
Life’s fulfillment emerges from individual participation in the service of humane ideals. We . . . animate our lives with a deep sense of purpose, finding wonder and awe in the joys and beauties of human existence, its challenges and tragedies, and even in the inevitability and finality of death. . . .
Humans are social by nature and find meaning in relationships. Humanists . . . strive toward a world of mutual care and concern, free of cruelty and its consequences, where differences are resolved cooperatively without resorting to violence. . . .
Working to benefit society maximizes individual happiness. Progressive cultures have worked to free humanity from the brutalities of mere survival and to reduce suffering, improve society, and develop global community. . . .2
The members of Humanist associations would be the first to insist that the ideals of humanism belong to no sect. Like Molière’s bourgeois gentleman who was delighted to learn he had been speaking prose all his life, many people are humanists without realizing it.3 Strands of humanism may be found in belief systems that go back to the Axial Age. They came to the fore during the Age of Reason and the Enlightenment, leading to the English, French, and American statements of rights, and got a second wind after World War II, inspiring the United Nations, the Universal Declaration of Human Rights, and other institutions of global cooperation.4 Though humanism does not invoke gods, spirits, or souls to ground meaning and morality, it is by no means incompatible with religious institutions. Some Eastern religions, including Confucianism and varieties of Buddhism, always grounded their ethics in human welfare rather than divine dictates. Many Jewish and Christian denominations have become humanistic, soft-pedaling their legacy of supernatural beliefs and ecclesiastical authority in favor of reason and universal human flourishing. Examples include the Quakers, Unitarians, liberal Episcopalians, Nordic Lutherans, and Reform, Reconstructionist, and Humanistic branches of Judaism.
Humanism may seem bland and unexceptionable—who could be against human flourishing? But in fact it is a distinctive moral commitment, one that does not come naturally to the human mind. As we shall see, it is vehemently opposed not just by many religious and political factions but, amazingly, by eminent artists, academics, and intellectuals. If humanism, like the other Enlightenment ideals, is to retain its hold on people’s minds, it must be explained and defended in the language and ideas of the current era.
* * *
Spinoza’s dictum is one of a family of principles that have sought a secular foundation for morality in impartiality—in the realization that there’s nothing magic about the pronouns I and me that could justify privileging my interests over yours or anyone else’s.5 If I object to being raped, maimed, starved, or killed, I can’t very well rape, maim, starve, or kill you. Impartiality underlies many attempts to construct morality on rational grounds: Spinoza’s viewpoint of eternity, Hobbes’s social contract, Kant’s categorical imperative, Rawls’s veil of ignorance, Nagel’s view from nowhere, Locke and Jefferson’s self-evident truth that all people are created equal, and of course the Golden Rule and its precious-metallic variants, rediscovered in hundreds of moral traditions.6 (The Silver Rule is “Don’t do to others what you don’t want done to yourself”; the Platinum Rule, “Do to others what they would have you do to them.” They are designed to anticipate masochists, suicide bombers, differences in taste, and other sticking points for the Golden Rule.)
To be sure, the argument from impartiality is incomplete. If there were a callous, egoistic, megalomaniacal sociopath who could exploit everyone else with impunity, no argument could convince him he had committed a logical fallacy. Also, arguments from impartiality have little content. Aside from a generic advisory to respect people’s wishes, the arguments say little about what those wishes are: the wants, needs, and experiences that define human flourishing. These are the desiderata that should not just be impartially allowed but actively sought and expanded for as many people as possible. Recall that Martha Nussbaum filled this gap by laying out a list of “fundamental capabilities” that people have the right to exercise, such as longevity, health, safety, literacy, knowledge, free expression, play, nature, and emotional and social attachments. But this is just a list, and it leaves the list-maker open to the objection that she is just enumerating her favorite things. Can we put humanistic morality on a deeper foundation—one that would rule out rational sociopaths and justify the human needs we are obligated to respect? I think we can.
According to the Declaration of Independence, the rights to life, liberty, and the pursuit of happiness are “self-evident.” That’s a bit unsatisfying, because what’s “self-evident” isn’t always self-evident. But it captures a key intuition. There would indeed be something perverse about having to justify life itself in the course of examining the foundations of morality, as if it were an open question whether one gets to finish the sentence or be shot. The very act of examining anything presupposes that one is around to do the examining. If Nagel’s transcendental argument about the non-negotiability of reason has merit—that the act of considering the validity of reason presupposes the validity of reason—then surely it presupposes the existence of reasoners.
This opens the door to deepening our humanistic justification of morality with two key ideas from science, entropy and evolution. Traditional analyses of the social contract imagined a colloquy among disembodied souls. Let’s enrich this idealization with the minimal premise that the reasoners exist in the physical universe. Much follows.
These incarnate beings must have defied the staggering odds against matter arranging itself into a thinking organ by being products of natural selection, the only physical process capable of producing complex adaptive design.7 And they must have defied the ravages of entropy long enough to be able to show up for the discussion and persist through it. That means they have taken in energy from the environment, stayed within a narrow envelope of conditions consistent with their physical integrity, and fended off assaults from living and nonliving dangers. As products of natural and sexual selection they must be the scions of a deeply rooted tree of replicators, each of whom won a mate and bore viable offspring. Since intelligence is not a wonder algorithm but is fed by knowledge, they must be driven to sop up information about the world and to be attentive to its nonrandom patterning. And if they are exchanging ideas with other rational entities, they must be on speaking terms: they must be social beings who risk time and safety in interacting with one another.8