Interestingly, Wilson starts modestly with a statement that I cannot gainsay, and that harmonizes with the central argument of this book, although Wilson does begin to give his preferences away when he speaks of science’s “proprietary sense of the future”—a property that I do not deny, by the way—as a clear “one up” over anything the arts may do in this supposedly equal union (page 230):

Scholars in the humanities should lift the anathema placed on reductionism. Scientists are not conquistadors out to melt the Inca gold. Science is free and the arts are free, and as I argued in the earlier account of mind, the two domains, despite the similarities in their creative spirit, have radically different goals and methods. The key to the exchange between them is . . . reinvigoration of interpretation with the knowledge of science and its proprietary sense of the future. Interpretation is the logical channel of consilient explanation between science and the arts.
Yet, as his argument develops, Wilson begins to claim more and more territory for natural science in resolving questions in the arts. Just three pages beyond this conciliatory statement, Wilson proposes that consonance with the epigenetic rules of human cognitive function may explain “enduring value” in art. Now, if by “enduring value” Wilson only wishes to make a purely empirical (even measurable) claim about how long, and by how many, a work has been treasured, then he may still be treading within the proper magisterium of science. But if he wishes to conflate such factual conformity to epigenetic rules with “enduring value” in the more usual normative sense of aesthetic worth, then I think that he has run aground on the mudbank of a logical divide:

Works of enduring value are those truest to these origins. It follows that even the greatest works of art might be understood fundamentally with knowledge of the biologically evolved epigenetic rules that guided them.
Finally, Wilson develops his evolutionary speculations on the adaptive advantage offered by art as the emotional basis for incorporating, by natural selection, certain cognitive universals into the epigenetic rules of human nature. Although I remain unattracted by such basically speculative forms of evolutionary argument, I find Wilson’s thoughts both plausible and interesting, albeit unsupported at present. But, at this climax in his discussion of the arts, Wilson now takes the illogical plunge by converting these legitimate speculations about factual and evolutionary origins into explicit claims about the meaning of beauty and truth in art. He begins by arguing that our rapidly increasing intelligence assured our survival and domination, but also exacted a great price (page 245):

This is the picture of the origin of the arts that appears to be emerging. The most distinctive qualities of the human species are extremely high intelligence, language, culture, and reliance on long-term social contracts. In combination they gave early Homo sapiens a decisive edge over all competing animal species, but they also exacted a price we continue to pay, composed of the shocking recognition of the self, of the finiteness of personal existence, and of the chaos of the environment.
“The dominating influence that spawned the arts,” Wilson then adds (page 245), “was the need to impose order on the confusion caused by intelligence.” We couldn’t achieve this control by using our immense brains as flexible computers, and therefore had to encode more-specific cognitive norms of adaptive benefit: “The evolving brain, nevertheless, could not convert to general intelligence alone; it could not turn into an all-purpose computer. So in the course of evolution the animal instincts of survival and reproduction were transformed into the epigenetic algorithms of human nature. It was necessary to keep in place these inborn programs for the rapid acquisition of language, sexual conduct, and other processes of mental development. Had the algorithms been erased, the species would have faced extinction.”
But these algorithms, or basic rules of human nature, were too few, too sketchy, and too general to maintain the necessary order all by themselves. So they gained expression as art, thus evoking emotions common and powerful enough to imbue the algorithms themselves with sufficient sway over human actions and propensities (page 246):

Algorithms could be built, but they weren’t numerous and precise enough to respond automatically and optimally to every possible event. The arts filled the gap. Early humans invented them in an attempt to express and control through magic the abundance of the environment, the power of solidarity, and other forces in their lives that mattered most to survival and reproduction.
In a final paragraph, Wilson makes a doubly false transition: first, from this speculative theory about origins to a claim about current and continuing utility of the arts; second, and more serious, from a claim in the magisterium of science about the emotional utility of art to a definition of truth and beauty in the magisterium of aesthetics. I may admire the boldness and abruptness of the final claim, but words don’t boil rice, and facts of nature or cognition cannot establish a consensus about what art should define as the “beautiful,” not to mention the “true.”

The arts were the means by which these forces could be ritualized and expressed in a new, simulated reality. They drew consistency from their faithfulness to human nature, to the emotion-guided epigenetic rules—the algorithms—of mental development. They achieved that fidelity by selecting the most evocative words, images, and rhythms, conforming to the emotional guides of the epigenetic rules, making the right moves. The arts still perform this primal function, and in much the same ancient way. Their quality is measured by their humanness, by the precision of their adherence to human nature. To an overwhelming degree that is what we mean when we speak of the true and beautiful in the arts [page 246].
Turning to ethics, Wilson bases his discussion upon a dichotomy of possible positions that, I thought, had been superseded and gently set aside long ago (with some exceptions, as in Clarence Thomas’s defense of “natural law” in explaining his legal views in hearings for his appointment to the Supreme Court; although Thomas won by the thinnest of margins, I don’t think that this aspect of his testimony aided his cause—not, to make the obvious point, that such issues have any bearing on so basically political a matter). In contrasting positions that he calls “transcendental” and “empirical,” Wilson argues that ethics either record human experience and represent our valid distillation of workable rules for human conduct (the “empirical” view, which would then make ethical precepts subject to factual adjudication and potential reduction to the natural sciences), or derive from a “higher” or more general source independent of our lives, and imposed a priori by some universal abstraction or divine will. Wilson begins his discussion by stating his support for the empirical alternative (page 260):

Centuries of debate on the origin of ethics come down to this: Either ethical precepts, such as justice and human rights, are independent of human experience or else they are human inventions. . . . The true answer will eventually be reached by the accumulation of objective evidence. Moral reasoning, I believe, is at every level intrinsically consilient with the natural sciences.
I regard this setting of the argument as strange, or at least peripheral to the major issue in discussing whether (and how) ethics might shake hands with science. I have little doubt that, on factual matters that might be included in the “anthropology of ethics,” the empirical position must prevail, whatever evolutionary reconstruction or interpretation we eventually give to the origin and initial meaning of moral precepts. That is (and I hardly know what other position a modern thinker could take, even a conventionally devout person who has never doubted that ethical truth resides in God’s proclamations), I assume that if we surveyed the world’s cultures and found that certain ethical principles tended to prevail, we could hypothesize that these principles served a useful function in social organization. If we then found any genetic predisposition for behaviors best suited to the practice of these principles, we could also specify a biological and evolutionary linkage to the origin of such beliefs.
Indeed, ever since reading David Hume as an undergraduate, trying like hell to prove him wrong (and failing utterly), I have strongly supported the notion that humans must possess some sort of “moral sense” as an aspect of what we call human nature, and as more than merely analogous with other basic attributes of sight, sound, et cetera. Since ethical “truths” are, in principle, unprovable in any sense that science can recognize (Hume’s point, if I understand him aright), I don’t know how else we could explain the commonality of certain preferences among various cultures, unless we propose their embodiment in something legitimately called a moral sense.
But how can these propositions address what has always been the crucial and heartrending question about ethics: “How ought we behave?”—an entirely different matter from “How do most of us act?” The “is” of the anthropology of morals (a scientific subject) just doesn’t lead me to the “ought” of the morality of morals (a nonscientific subject usually placed in the bailiwick of the humanities).
Wilson, of course, knows that reservoirs of ink have been filled with discussion about whether factual matters can be directly translated into normative or ethical judgments—the famous (or infamous) distinction of “is” and “ought,” termed “the naturalistic fallacy” by the early-twentieth-century philosopher G. E. Moore, who evidently argued, in devising this name, that such transitions could not be logically accomplished. But Wilson glosses this issue of the ages by simply stating, more or less, that one obviously can make the move from “is” to “ought” (a prerequisite, needless to say, for the success, or even for the existence, of his program for consilience), and that he can’t quite see what all the fuss has been about. In advocating this easy bridge between the anthropology of morals and the morality of morals, Wilson defends what he calls the empiricist position (page 262):

Ethics, in the empiricist view, is conduct favored consistently enough throughout a society to be expressed as a code of principles. It is driven by hereditary predispositions in mental development—the “moral sentiments” of the Enlightenment philosophers—causing broad convergence across cultures, while reaching precise form in each culture according to historical circumstance. The codes, whether judged by outsiders as good or evil, play an important role in determining which cultures flourish, and which will decline. The importance of the empiricist view is its emphasis on objective knowledge. . . . The choice between transcendentalism and empiricism will be the coming century’s version of the struggle for men’s souls. Moral reasoning will either remain centered in idioms of theology and philosophy, where it is now, or it will shift toward science-based material analysis. Where it settles will depend on which world view is proved correct, or at least which is more widely perceived to be correct.
In an even more incisive statement, Wilson defends the subsumption of ethics into the natural sciences, but then falls into the classical, and still disabling, fallacy in his last line (page 273):

To translate is into ought makes sense if we attend to the objective meaning of ethical precepts. They are very unlikely to be ethereal messages outside humanity awaiting revelation, or independent truths vibrating in a nonmaterial dimension of the mind. They are more likely to be physical products of the brain and culture. From the consilient perspective of the natural sciences, they are no more than principles of the social contract hardened into rules and dictates, the behavioral codes that members of a society fervently wish others to follow and are willing to accept themselves for the common good. For if ought is not is, what is?
The argument might just work if we could define “the common good”—the goal of ethical behavior, as Wilson seems to grant—in the objective and empirical terms that subsumption of ethics into the natural sciences inevitably requires. For, once one defines “the common good,” then empirical inquiry can determine which behaviors may best achieve the stated goals, and whether (and how) societies have established their ethical rules to reach those ends. But how can we define “the common good,” the source of all subsequent arguments, in empirical terms that science may study? Frankly, I don’t think that we can—and neither did Hume; nor did G. E. Moore; nor have legions of scholars in the humanities (and the sciences too, for that matter) who have struggled with this issue for centuries, and decided that no single holy grail can exist if several separate streams flow with immiscible waters across the common landscape of our search for wisdom.
How could “the common good” be rendered empirically? The effort stumbles and collapses on the problem that spawned such terms as “the naturalistic fallacy.” As I have argued before in this book (see page 142), how can empiricism prevail as the ground of ethics if we discover that most societies, at most times, have condoned as righteous (and validated by ethical rules) a wide variety of beliefs and behaviors—including infanticide, xenophobia (sometimes leading all the way to genocide), and domination and differential punishment of various physically “weaker” groups, including women and children—that most of us strongly wish to repudiate today, with the repudiation, moreover, regarded as the very foundation of a better ethical system? Shall we say that most societies have just been empirically wrong during most of human history—and that we now know better, in much the same way that we once defended a geocentric cosmos and then learned that the earth circles the sun?
Then, in an even more troubling question (that, I suspect, will find a positive answer in empirical terms, and far too often to grant us comfort): how can empiricism prevail as a basis for ethics if we then discover that Homo sapiens has indeed evolved biological propensities for the very behaviors that we now wish to repudiate and abjure? What can we say, at this plausible point, except that the empirical anthropology of morals led most societies to a set of precepts with evolutionary origins that may once have made good sense in terms of Darwinian survival—whereas most people have subsequently decided that better morality would lead us to precisely opposite behaviors? How, then, can we avoid the conclusion that the morality of morals (the basis for our decision to forswear an aspect of human nature) must be validated on a basis different from the factual reasons that led our ancestors to adopt moral codes now deemed fit only for rejection on ethical grounds?
At this point, one can hardly avoid the question of questions: If factual nature cannot establish the basis of moral truth, where then can we find it? I don’t feel excessively evasive or stupid in admitting that I have struggled with this deepest of issues all my conscious life, and although I can summarize the classical positions offered by our best thinkers through history, I have never been able to formulate anything new or better. After all, if David Hume, and others ten times smarter than I could ever be, have similarly struggled and basically failed, I need not berate myself for coming no closer. I only rejoice that the great majority of good and sensible people in this world seem able to reach a basic consensus on a few central precepts embodied in what we call respect, dignity, and reciprocity, a minimal foundation for enough space and freedom to attempt an ethical life. And if most of these principles sound “negative,” as in primum non nocere (above all, do no harm), or represent what philosophers call hypothetical rather than categorical imperatives (that is, statements like the Golden Rule based on negotiation and reciprocity rather than upon a priori absolutes), then I say bravo for the human decency (an aspect, no doubt, of the “moral sense”) that allows us to build reasonable lives on such a flexible and minimal foundation.
Finally, although I reject the possibility of deriving moral principles from empirical study of nature and human evolution, I certainly do not view the divide between “is” and “ought” as utterly impermeable in the sense of claiming that facts can have no relevance for moral thought (although I would defend a strictly logical impermeability in terms of direct movement from natural fact to moral precept). Empirical data will enter any serious discussion of moral principles for a set of obvious reasons, with two rather simple and silly examples listed here as mere placeholders for the generality. First, although technically not illogical, we would be pretty damned stupid (and condemned to utter frustration) if we decided to define something as morally blessed and ethically necessary, even though factual nature declared the feat impossible to attain—as if, for example, we declared the ability to throw a baseball two hundred miles per hour as the chief desideratum of human virtue. Second, and not by any means so inane (but still obvious as the most important impact of factual constraint upon moral struggle), we need to know the factual biology of human nature, if only to gain a better understanding of what will be difficult, and to avoid disappointment at the depth of our struggle, when we properly decide to ascribe moral importance to behaviors that are hard to achieve because they run counter to inborn propensities—as (in a plausible Darwinian inference) for certain forms of cooperation that reduce our own salary or noticeability, but confer no obvious advantages through the attention or respect gained from others for our altruistic actions.
Wilson, however, still seems to feel that if he can specify the historical origin of ethics empirically—a genuine possibility that I regard with optimism—he has solved the basic problem of morality and established a basis for the reduction of ethical philosophy to the natural sciences within his grand chain of consilience. He writes, for example (pages 274–75): “If the empiricist world view is correct, ought is just shorthand for one kind of factual statement, a word that denotes what society first chose (or was coerced) to do, and then codified. . . . Ought is the product of a material process. The solution points the way to an objective grasp of the origin of ethics.”