Can We Ever Be "Right" About Right and Wrong?

  The philosopher and neuroscientist Joshua Greene has done some of the most influential neuroimaging research on morality. 13 While Greene wants to understand the brain processes that govern our moral lives, he believes that we should be skeptical of moral realism on metaphysical grounds. For Greene, the question is not, "How can you know for sure that your moral beliefs are true?" but rather, "How could it be that anyone's moral beliefs are true?" In other words, what is it about the world that could make a moral claim true or false? 14 He appears to believe that the answer to this question is "nothing."

  However, it seems to me that this question is easily answered. Moral view A is truer than moral view B, if A entails a more accurate understanding of the connections between human thoughts/intentions/behavior and human well-being. Does forcing women and girls to wear burqas make a net positive contribution to human well-being? Does it produce happier boys and girls? Does it produce more compassionate men or more contented women? Does it make for better relationships between men and women, between boys and their mothers, or between girls and their fathers? I would bet my life that the answer to each of these questions is "no." So, I think, would many scientists. And yet, as we have seen, most scientists have been trained to think that such judgments are mere expressions of cultural bias—and, thus, unscientific in principle. Very few of us seem willing to admit that such simple moral truths increasingly fall within the scope of our scientific worldview. Greene articulates the prevailing skepticism quite well:

  Moral judgment is, for the most part, driven not by moral reasoning, but by moral intuitions of an emotional nature. Our capacity for moral judgment is a complex evolutionary adaptation to an intensely social life. We are, in fact, so well adapted to making moral judgments that our making them is, from our point of view, rather easy, a part of "common sense." And like many of our common sense abilities, our ability to make moral judgments feels to us like a perceptual ability, an ability, in this case, to discern immediately and reliably mind-independent moral facts. As a result, we are naturally inclined toward a mistaken belief in moral realism. The psychological tendencies that encourage this false belief serve an important biological purpose, and that explains why we should find moral realism so attractive even though it is false. Moral realism is, once again, a mistake we were born to make. 15

  Greene alleges that moral realism assumes that "there is sufficient uniformity in people's underlying moral outlooks to warrant speaking as if there is a fact of the matter about what's 'right' or 'wrong,' 'just' or 'unjust.'" 16 But do we really need to assume such uniformity for there to be right answers to moral questions? Is physical or biological realism predicated on "sufficient uniformity in people's underlying [physical or biological] outlooks"? Taking humanity as a whole, I am quite certain that there is a greater consensus that cruelty is wrong (a common moral precept) than that the passage of time varies with velocity (special relativity) or that humans and lobsters share a common ancestor (evolution). Should we doubt whether there is a "fact of the matter" with respect to these physical and biological truth claims? Does the general ignorance about the special theory of relativity or the pervasive disinclination of Americans to accept the scientific consensus on evolution put our scientific worldview, even slightly, in question? 17

  Greene notes that it is often difficult to get people to agree about moral truth, or even to get an individual to agree with himself in different contexts. These tensions lead him to the following conclusion:

  Moral theorizing fails because our intuitions do not reflect a coherent set of moral truths and were not designed by natural selection or anything else to behave as if they were ... If you want to make sense of your moral sense, turn to biology, psychology, and sociology—not normative ethics. 18

  This objection to moral realism may seem reasonable, until one notices that it can be applied, with the same leveling effect, to any domain of human knowledge. For instance, it is just as true to say that our logical, mathematical, and physical intuitions have not been designed by natural selection to track the Truth. 19 Does this mean that we must cease to be realists with respect to physical reality? We need not look far in science to find ideas and opinions that defy easy synthesis. There are many scientific frameworks (and levels of description) that resist integration and which divide our discourse into areas of specialization, even pitting Nobel laureates in the same discipline against one another. Does this mean that we can never hope to understand what is really going on in the world? No. It means the conversation must continue. 20

  Total uniformity in the moral sphere—either interpersonally or intrapersonally—may be hopeless. So what? This is precisely the lack of closure we face in all areas of human knowledge. Full consensus as a scientific goal only exists in the limit, at a hypothetical end of inquiry. Why not tolerate the same open-endedness in our thinking about human well-being?

  Again, this does not mean that all opinions about morality are justified. To the contrary—the moment we accept that there are right and wrong answers to questions of human well-being, we must admit that many people are simply wrong about morality. The eunuchs who tended the royal family in China's Forbidden City, dynasty after dynasty, seem to have felt generally well compensated for their lives of arrested development and isolation by the influence they achieved at court—as well as by the knowledge that their genitalia, which had been preserved in jars all the while, would be buried with them after their deaths, ensuring them rebirth as human beings. When confronted with such an exotic point of view, a moral realist would like to say we are witnessing more than a mere difference of opinion: we are in the presence of moral error. It seems to me that we can be reasonably confident that it is bad for parents to sell their sons into the service of a government that intends to cut off their genitalia "using only hot chili sauce as a local anesthetic." 21 This would mean that Sun Yaoting, the emperor's last eunuch, who died in 1996 at the age of ninety-four, was wrong to harbor, as his greatest regret, "the fall of the imperial system he had aspired to serve." Most scientists seem to believe that no matter how maladaptive or masochistic a person's moral commitments, it is impossible to say that he is ever mistaken about what constitutes a good life.

  Moral Paradox

  One of the problems with consequentialism in practice is that we cannot always determine whether the effects of an action will be bad or good. In fact, it can be surprisingly difficult to decide this even in retrospect.

  Dennett has dubbed this problem "the Three Mile Island Effect." 22 Was the meltdown at Three Mile Island a bad outcome or a good one? At first glance, it surely seems bad, but it might have also put us on a path toward greater nuclear safety, thereby saving many lives. Or it might have caused us to grow dependent on more polluting technologies, contributing to higher rates of cancer and to global climate change. Or it might have produced a multitude of effects, some mutually reinforcing, and some mutually canceling. If we cannot determine the net result of even such a well-analyzed event, how can we judge the likely consequences of the countless decisions we must make throughout our lives?

  One difficulty we face in determining the moral valence of an event is that it often seems impossible to determine whose well-being should most concern us. People have competing interests, mutually incompatible notions of happiness, and there are many well-known paradoxes that leap into our path the moment we begin thinking about the welfare of whole populations. As we are about to see, population ethics is a notorious engine of paradox, and no one, to my knowledge, has come up with a way of assessing collective well-being that conserves all of our intuitions. As the philosopher Patricia Churchland puts it, "no one has the slightest idea how to compare the mild headache of five million against the broken legs of two, or the needs of one's own two children against the needs of a hundred unrelated brain-damaged children in Serbia." 23

  Such puzzles may seem of mere academic interest, until we realize that population ethics governs the most important decisions societies ever make. What are our moral responsibilities in times of war, when diseases spread, when millions suffer famine, or when global resources are scarce? These are moments in which we have to assess changes in collective welfare in ways that purport to be rational and ethical. Just how motivated should we be to act when 250,000 people die in an earthquake in Haiti? Whether we know it or not, intuitions about the welfare of whole populations determine our thinking on these matters.

  Except, that is, when we simply ignore population ethics—as, it seems, we are psychologically disposed to do. The work of the psychologist Paul Slovic and colleagues has uncovered some rather startling limitations on our capacity for moral reasoning when thinking about large groups of people—or, indeed, about groups larger than one. 24 As Slovic observes, when human life is threatened, it seems both rational and moral for our concern to increase with the number of lives at stake. And if we think that losing many lives might have some additional negative consequences (like the collapse of civilization), the curve of our concern should grow steeper still. But this is not how we characteristically respond to the suffering of other human beings.

  Slovic's experimental work suggests that we intuitively care most about a single, identifiable human life, less about two, and we grow more callous as the body count rises. Slovic believes that this "psychic numbing" explains the widely lamented fact that we are generally more distressed by the suffering of a single child (or even a single animal) than by a proper genocide. What Slovic has termed "genocide neglect"—our reliable failure to respond, both practically and emotionally, to the most horrific instances of unnecessary human suffering—represents one of the more perplexing and consequential failures of our moral intuition.

  Slovic found that when given a chance to donate money in support of needy children, subjects give most generously and feel the greatest empathy when told only about a single child's suffering. When presented with two needy cases, their compassion wanes. And this diabolical trend continues: the greater the need, the less people are emotionally affected and the less they are inclined to give.

  Of course, charities have long understood that putting a face on the data will connect their constituents to the reality of human suffering and increase donations. Slovic's work has confirmed this suspicion, which is now known as the "identifiable victim effect." 25 Amazingly, however, adding information about the scope of a problem to these personal appeals proves to be counterproductive. Slovic has shown that setting the story of a single needy person in the context of wider human need reliably diminishes altruism.

  The fact that people seem to be reliably less concerned when faced with an increase in human suffering represents an obvious violation of moral norms. The important point, however, is that we immediately recognize how indefensible this allocation of emotional and material resources is once it is brought to our attention. What makes these experimental findings so striking is that they are patently inconsistent: if you care about what happens to one little girl, and you care about what happens to her brother, you must, at the very least, care as much about their combined fate. Your concern should be (in some sense) cumulative. 26 When your violation of this principle is revealed, you will feel that you have committed a moral error. This explains why results of this kind can only be obtained between subjects (where one group is asked to donate to help one child and another group is asked to support two); we can be sure that if we presented both questions to each participant in the study, the effect would disappear (unless subjects could be prevented from noticing when they were violating the norms of moral reasoning).
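  One way to state the norm being violated (a shorthand of my own, not notation drawn from Slovic's studies) is to let $V(S)$ stand for the degree of concern owed to a group of sufferers $S$. Cumulativity then requires, at a minimum,

\[
V(\{\text{girl}, \text{brother}\}) \;\geq\; \max\bigl(V(\{\text{girl}\}),\; V(\{\text{brother}\})\bigr),
\]

whereas subjects in these between-subjects experiments behave as though $V$ shrinks once more than one victim comes into view.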

  Clearly, one of the great tasks of civilization is to create cultural mechanisms that protect us from the moment-to-moment failures of our ethical intuitions. We must build our better selves into our laws, tax codes, and institutions. Knowing that we are generally incapable of valuing two children more than either child alone, we must build a structure that reflects and enforces our deeper understanding of human well-being. This is where a science of morality could be indispensable to us: the more we understand the causes and constituents of human fulfillment, and the more we know about the experiences of our fellow human beings, the more we will be able to make intelligent decisions about which social policies to adopt.

  For instance, there are an estimated 90,000 people living on the streets of Los Angeles. Why are they homeless? How many of these people are mentally ill? How many are addicted to drugs or alcohol? How many have simply fallen through the cracks in our economy? Such questions have answers. And each of these problems admits of a range of responses, as well as false solutions and neglect. Are there policies we could adopt that would make it easy for every person in the United States to help alleviate the problem of homelessness in their own communities? Is there some brilliant idea that no one has thought of that would make people want to alleviate the problem of homelessness more than they want to watch television or play video games?

  Would it be possible to design a video game that could help solve the problem of homelessness in the real world? 27 Again, such questions open onto a world of facts, whether or not we can bring the relevant facts into view.

  Clearly, morality is shaped by cultural norms to a great degree, and it can be difficult to do what one believes to be right on one's own. A friend's four-year-old daughter recently observed the role that social support plays in making moral decisions:

  "It's so sad to eat baby lambies," she said as she gnawed greedily on a lamb chop.

  "So, why don't you stop eating them?" her father asked.

  "Why would they kill such a soft animal? Why wouldn't they kill some other kind of animal?"

  "Because," her father said, "people like to eat the meat. Like you are, right now."

  His daughter reflected for a moment—still chewing her lamb—and then replied:

  "It's not good. But I can't stop eating them if they keeping killing them."

  And the practical difficulties for consequentialism do not end here. When thinking about maximizing the well-being of a population, are we thinking in terms of total or average well-being? The philosopher Derek Parfit has shown that both bases of calculation lead to troubling paradoxes. 28 If we are concerned only about total welfare, we should prefer a world with hundreds of billions of people whose lives are just barely worth living to a world in which 7 billion of us live in perfect ecstasy. This is the result of Parfit's famous argument known as "The Repugnant Conclusion." 29 If, on the other hand, we are concerned about the average welfare of a population, we should prefer a world containing a single, happy inhabitant to a world of billions who are only slightly less happy; this criterion would even suggest that we might want to painlessly kill many of the least happy people currently alive, thereby increasing the average level of human well-being. Privileging average welfare would also lead us to prefer a world in which billions live under the misery of constant torture to a world in which only one person is tortured ever-so-slightly more. It could also render the morality of an action dependent upon the experience of unaffected people. As Parfit points out, if we care about the average over time, we might deem it morally wrong to have a child today whose life, while eminently worth living, would not compare favorably to the lives of the ancient Egyptians. Parfit has even devised scenarios in which everyone alive could have a lower quality of life than they otherwise would and yet the average quality of life would have increased. 30 Clearly, we cannot rely on a simple summation or averaging of welfare as our only metric. And yet, at the extremes, we can see that human welfare must aggregate in some way: it really is better for all of us to be deeply fulfilled than it is for everyone to live in absolute agony.
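  To see how the two bases of calculation pull apart, it may help to run the numbers on a toy case (the figures are invented purely for illustration; nothing in Parfit's argument depends on them). Let world $A$ contain 7 billion people at a well-being level of 100, and world $B$ contain a trillion people whose lives, at a level of 1, are just barely worth living:

\[
\text{Total welfare:}\quad
\underbrace{10^{12} \times 1 = 10^{12}}_{\text{world } B}
\;>\;
\underbrace{7\times 10^{9} \times 100 = 7\times 10^{11}}_{\text{world } A},
\]
\[
\text{Average welfare:}\quad
\underbrace{100}_{\text{world } A}
\;>\;
\underbrace{1}_{\text{world } B}.
\]

A total-welfare criterion therefore ranks the vast, barely-worth-living world above the ecstatic one (the Repugnant Conclusion), while an average-welfare criterion delivers the opposite verdict here yet generates its own absurdities, since an average can always be raised by removing the least happy without making anyone else better off.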

  Placing only consequences in our moral balance also leads to indelicate questions. For instance, do we have a moral obligation to come to the aid of wealthy, healthy, and intelligent hostages before poor, sickly, and slow-witted ones? After all, the former are more likely to make a positive contribution to society upon their release. And what about remaining partial to one's friends and family? Is it wrong for me to save the life of my only child if, in the process, I neglect to save a stranger's brood of eight? Wrestling with such questions has convinced many people that morality does not obey the simple laws of arithmetic.

  However, such puzzles merely suggest that certain moral questions could be difficult or impossible to answer in practice; they do not suggest that morality depends upon something other than the consequences of our actions and intentions. This is a frequent source of confusion: consequentialism is less a method of answering moral questions than it is a claim about the status of moral truth. Our assessment of consequences in the moral domain must proceed as it does in all others: under the shadow of uncertainty, guided by theory, data, and honest conversation. The fact that it may often be difficult, or even impossible, to know what the consequences of our thoughts and actions will be does not mean that there is some other basis for human values that is worth worrying about.

  Such difficulties notwithstanding, it seems to me quite possible that we will one day resolve moral questions that are often thought to be unanswerable. For instance, we might agree that having a preference for one's intimates is better (in that it increases general welfare) than being fully disinterested as to how consequences accrue. Which is to say that there may be some forms of love and happiness that are best served by each of us being specially connected to a subset of humanity. This certainly appears to be descriptively true of us at present. Communal experiments that ignore parents' special attachment to their own children, for instance, do not seem to work very well. The Israeli kibbutzim learned this the hard way: after discovering that raising children communally made both parents and children less happy, they reinstated the nuclear family. 31 Most people may be happier in a world in which a natural bias toward one's own children is conserved—presumably in the context of laws and social norms that disregard this bias. When I take my daughter to the hospital, I am naturally more concerned about her than I am about the other children in the lobby. I do not, however, expect the hospital staff to share my bias. In fact, given time to reflect about it, I realize that I would not want them to. How could such a denial of my self-interest actually be in the service of my self-interest? Well, first, there are many more ways for a system to be biased against me than in my favor, and I know that I will benefit from a fair system far more than I will from one that can be easily corrupted. I also happen to care about other people, and this experience of empathy deeply matters to me. I feel better as a person valuing fairness, and I want my daughter to become a person who shares this value. And how would I feel if the physician attending my daughter actually shared my bias for her and viewed her as far more important than the other patients under his care? Frankly, it would give me the creeps.