But how will these claims and behaviors—compelling as they will be—relate to the subjective experience of nonbiological humans? We keep coming back to the very real but ultimately unmeasurable (by fully objective means) issue of consciousness. People often talk about consciousness as if it were a clear property of an entity that can readily be identified, detected, and gauged. If there is one crucial insight we can offer regarding why the issue of consciousness is so contentious, it is the following:

  There exists no objective test that can conclusively determine its presence.

  Science is about objective measurements and their logical implications, but the very nature of objectivity is that you cannot measure subjective experience—you can only measure correlates of it, such as behavior (and by behavior, I include internal behavior—that is, the actions of the components of an entity, such as neurons and their many parts). This limitation has to do with the very nature of the concepts of “objectivity” and “subjectivity.” Fundamentally we cannot penetrate the subjective experience of another entity with direct objective measurement. We can certainly make arguments about it, such as, “Look inside the brain of this nonbiological entity; see how its methods are just like those of a human brain.” Or, “See how its behavior is just like human behavior.” But in the end, these remain just arguments. No matter how convincing the behavior of a nonbiological person, some observers will refuse to accept the consciousness of such an entity unless it squirts neurotransmitters, is based on DNA-guided protein synthesis, or has some other specific biologically human attribute.

  We assume that other humans are conscious, but even that is an assumption. There is no consensus among humans about the consciousness of nonhuman entities, such as higher animals. Consider the debates regarding animal rights, which have everything to do with whether animals are conscious or just quasi-machines that operate by “instinct.” The issue will be even more contentious with regard to future nonbiological entities that exhibit behavior and intelligence even more humanlike than those of animals.

  In fact these future machines will be even more humanlike than humans today. If that seems like a paradoxical statement, consider that much of human thought today is petty and derivative. We marvel at Einstein’s ability to conjure up the theory of general relativity from a thought experiment or Beethoven’s ability to imagine symphonies that he could never hear. But these instances of human thought at its best are rare and fleeting. (Fortunately we have a record of these fleeting moments, reflecting a key capability that has separated humans from other animals.) Our future primarily nonbiological selves will be vastly more intelligent and so will exhibit these finer qualities of human thought to a far greater degree.

  So how will we come to terms with the consciousness that will be claimed by nonbiological intelligence? From a practical perspective such claims will be accepted. For one thing, “they” will be us, so there won’t be any clear distinctions between biological and nonbiological intelligence. Furthermore, these nonbiological entities will be extremely intelligent, so they’ll be able to convince other humans (biological, nonbiological, or somewhere in between) that they are conscious. They’ll have all the delicate emotional cues that convince us today that humans are conscious. They will be able to make other humans laugh and cry. And they’ll get mad if others don’t accept their claims. But this is fundamentally a political and psychological prediction, not a philosophical argument.

  I do take issue with those who maintain that subjective experience either doesn’t exist or is an inessential quality that can safely be ignored. The issue of who or what is conscious and the nature of the subjective experiences of others are fundamental to our concepts of ethics, morality, and law. Our legal system is based largely on the concept of consciousness, with particularly serious attention paid to actions that cause suffering—an especially acute form of conscious experience—to a (conscious) human or that end the conscious experience of a human (for example, murder).

  Human ambivalence regarding the ability of animals to suffer is reflected in legislation as well. We have laws against animal cruelty, with greater emphasis given to more intelligent animals, such as primates (although we appear to have a blind spot with regard to the massive animal suffering involved in factory farming, but that’s the subject of another treatise).

  My point is that we cannot safely dismiss the question of consciousness as merely a polite philosophical concern. It is at the core of society’s legal and moral foundation. The debate will change when a machine—nonbiological intelligence—can persuasively argue on its own that it/he/she has feelings that need to be respected. Once it can do so with a sense of humor—which is particularly important for convincing others of one’s humanness—it is likely that the debate will be won.

  I expect that actual change in our legal system will come initially from litigation rather than legislation, as litigation often precipitates such transformations. In a precursor of what is to come, attorney Martine Rothblatt, a partner in Mahon, Patusky, Rothblatt & Fisher, filed a mock motion on September 16, 2003, to prevent a corporation from disconnecting a conscious computer. The motion was argued in a mock trial in the biocyberethics session at the International Bar Association conference.10

  We can measure certain correlates of subjective experience (for example, by correlating certain patterns of objectively measurable neurological activity with objectively verifiable reports of certain subjective experiences, such as hearing a sound). But we cannot penetrate to the core of subjective experience through objective measurement. As I mentioned in chapter 1, we are dealing with the difference between third-person “objective” experience, which is the basis of science, and first-person “subjective” experience, which is a synonym for consciousness.

  Consider that we are unable to truly experience the subjective experiences of others. The experience-beaming technology of 2029 will enable the brain of one person to experience only the sensory experiences (and potentially some of the neurological correlates of emotions and other aspects of experience) of another person. But that will still not convey the same internal experience as that undergone by the person beaming the experience, because his or her brain is different. Every day we hear reports about the experiences of others, and we may even feel empathy in response to the behavior that results from their internal states. But because we’re exposed to only the behavior of others, we can only imagine their subjective experiences. Because it is possible to construct a perfectly consistent, scientific worldview that omits the existence of consciousness, some observers come to the conclusion that it’s just an illusion.

  Jaron Lanier, the virtual-reality pioneer, takes issue (in the third of his six objections to what he calls “cybernetic totalism” in his treatise “One Half a Manifesto”) with those who maintain “that subjective experience either doesn’t exist, or is unimportant because it is some sort of ambient or peripheral effect.”11 As I pointed out, there is no device or system we can postulate that could definitively detect subjectivity (conscious experience) associated with an entity. Any such purported device would have philosophical assumptions built into it. Although I disagree with much of Lanier’s treatise (see the “Criticism from Software” section in chapter 9), I concur with him on this issue and can even imagine (and empathize with!) his feelings of frustration at the dictums of “cybernetic totalists” such as myself (not that I accept this characterization).12 Like Lanier I even accept the subjective experience of those who maintain that there is no such thing as subjective experience.

  Precisely because we cannot resolve issues of consciousness entirely through objective measurement and analysis (science), a critical role exists for philosophy. Consciousness is the most important ontological question. After all, if we truly imagine a world in which there is no subjective experience (a world in which there is swirling stuff but no conscious entity to experience it), that world may as well not exist. In some philosophical traditions, both Eastern (certain schools of Buddhist thought, for example) and Western (specifically, observer-based interpretations of quantum mechanics), that is exactly how such a world is regarded.

  RAY: We can debate what sorts of entities are or can be conscious. We can argue about whether consciousness is an emergent property or caused by some specific mechanism, biological or otherwise. But there’s another mystery associated with consciousness, perhaps the most important one.

  MOLLY 2004: Okay, I’m all ears.

  RAY: Well, even if we assume that all humans who seem to be conscious in fact are, why is my consciousness associated with this particular person, me? Why am I conscious of this particular person who read Tom Swift Jr. books as a child, got involved with inventions, writes books about the future, and so on? Every morning that I wake up, I have the experiences of this specific person. Why wasn’t I Alanis Morissette or someone else?

  SIGMUND FREUD: Hmm, so you’d like to be Alanis Morissette?

  RAY: That’s an interesting proposition, but that’s not my point.

  MOLLY 2004: What is your point? I don’t understand.

  RAY: Why am I conscious of the experiences and decisions of this particular person?

  MOLLY 2004: Because, silly, that’s who you are.

  SIGMUND: It seems that there’s something about yourself that you don’t like. Tell me more about that.

  MOLLY 2004: Earlier, Ray didn’t like being human altogether.

  RAY: I didn’t say I don’t like being human. I said I didn’t like the limitations, problems, and high level of maintenance of my version 1.0 body. But this is all beside the point that I’m trying to make here.

  CHARLES DARWIN: You wonder why you’re you? That’s a tautology; there’s not much to wonder about.

  RAY: Like many attempts to express the really “hard” problems of consciousness, this is sounding meaningless. But if you ask me what I really wonder about, it is this: why am I continually aware of this particular person’s experiences and feelings? As for other people’s consciousness, I accept it, but I don’t experience other people’s experiences, not directly anyway.

  SIGMUND: Okay, I’m getting a clearer picture now. You don’t experience other people’s experiences? Have you ever talked to someone about empathy?

  RAY: Look, I’m talking about consciousness now in a very personal way.

  SIGMUND: That’s good, keep going.

  RAY: Actually, this is a good example of what typically happens when people try to have a dialogue about consciousness. The discussion inevitably veers off into something else, like psychology or behavior or intelligence or neurology. But the mystery of why I am this particular person is what I really wonder about.

  CHARLES: You know, you do create who you are.

  RAY: Yes, that’s true. Just as our brains create our thoughts, our thoughts in turn create our brains.

  CHARLES: So you’ve made yourself, and that’s why you are who you are, so to speak.

  MOLLY 2104: We experience that very directly in 2104. Being nonbiological, I’m able to change who I am quite readily. As we discussed earlier, if I’m in the mood, I can combine my thought patterns with someone else’s and create a merged identity. It’s a profound experience.

  MOLLY 2004: Well, Miss Molly of the future, we do that back in the primitive days of 2004 also. We call it falling in love.

  Who Am I? What Am I?

  Why are you you?

  —THE IMPLIED QUESTION IN THE ACRONYM YRUU (YOUNG RELIGIOUS UNITARIAN UNIVERSALISTS), AN ORGANIZATION I WAS ACTIVE IN WHEN I WAS GROWING UP IN THE EARLY 1960S (IT WAS THEN CALLED LRY, LIBERAL RELIGIOUS YOUTH).

  What you are looking for is who is looking.

  —SAINT FRANCIS OF ASSISI

  I’m not aware of too many things

  I know what I know if you know what I mean.

  Philosophy is the talk on a cereal box.

  Religion is the smile on a dog. . . .

  Philosophy is a walk on the slippery rocks.

  Religion is a light in the fog. . . .

  What I am is what I am.

  Are you what you are or what?

  —EDIE BRICKELL, “WHAT I AM”

  Freedom of will is the ability to do gladly that which I must do.

  —CARL JUNG

  The chance of the quantum theoretician is not the ethical freedom of the Augustinian.

  —NORBERT WIENER13

  I should prefer to an ordinary death, being immersed with a few friends in a cask of Madeira, until that time, then to be recalled to life by the solar warmth of my dear country! But in all probability, we live in a century too little advanced, and too near the infancy of science, to see such an art brought in our time to its perfection.

  —BENJAMIN FRANKLIN, 1773

  A related but distinct question has to do with our own identities. We talked earlier about the potential to upload the patterns of an individual mind—knowledge, skills, personality, memories—to another substrate. Although the new entity would act just like me, the question remains: is it really me?

  Some of the scenarios for radical life extension involve reengineering and rebuilding the systems and subsystems that our bodies and brains comprise. In taking part in this reconstruction, do I lose my self along the way? Again, this issue will transform itself from a centuries-old philosophical dialogue to a pressing practical matter in the next several decades.

  So who am I? Since I am constantly changing, am I just a pattern? What if someone copies that pattern? Am I the original and/or the copy? Perhaps I am this stuff here—that is, the ordered and chaotic collection of molecules that makes up my body and brain.

  But there’s a problem with this position. The specific set of particles that my body and brain comprise is in fact completely different from the atoms and molecules that I comprised only a short while ago. We know that most of our cells are turned over in a matter of weeks, and even our neurons, which persist as distinct cells for a relatively long time, nonetheless change all of their constituent molecules within a month.14 The half-life of a microtubule (a protein filament that provides the structure of a neuron) is about ten minutes. The actin filaments in dendrites are replaced about every forty seconds. The proteins that power the synapses are replaced about every hour. NMDA receptors in synapses stick around for a relatively long five days.

  So I am a completely different set of stuff than I was a month ago, and all that persists is the pattern of organization of that stuff. The pattern changes also, but slowly and in a continuum. I am rather like the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules of water change every millisecond, but the pattern persists for hours or even years.

  Perhaps, therefore, we should say I am a pattern of matter and energy that persists over time. But there is a problem with this definition, as well, since we will ultimately be able to upload this pattern to replicate my body and brain to a sufficiently high degree of accuracy that the copy is indistinguishable from the original. (That is, the copy could pass a “Ray Kurzweil” Turing test.) The copy, therefore, will share my pattern. One might counter that we may not get every detail correct, but as time goes on our attempts to create a neural and body replica will increase in resolution and accuracy at the same exponential pace that governs all information-based technologies. We will ultimately be able to capture and re-create my pattern of salient neural and physical details to any desired degree of accuracy.

  Although the copy shares my pattern, it would be hard to say that the copy is me because I would—or could—still be here. You could even scan and copy me while I was sleeping. If you come to me in the morning and say, “Good news, Ray, we’ve successfully reinstantiated you into a more durable substrate, so we won’t be needing your old body and brain anymore,” I may beg to differ.

  If you do the thought experiment, it’s clear that the copy may look and act just like me, but it’s nonetheless not me. I may not even know that he was created. Although he would have all my memories and recall having been me, from the point in time of his creation Ray 2 would have his own unique experiences, and his reality would begin to diverge from mine.

  This is a real issue with regard to cryonics (the process of preserving by freezing a person who has just died, with a view toward “reanimating” him later when the technology exists to reverse the damage from the early stages of the dying process, the cryonic-preservation process, and the disease or condition that killed him in the first place). Assuming a “preserved” person is ultimately reanimated, many of the proposed methods imply that the reanimated person will essentially be “rebuilt” with new materials and even entirely new neuromorphically equivalent systems. The reanimated person will, therefore, effectively be “Ray 2” (that is, someone else).

  Now let’s pursue this train of thought a bit further, and you will see where the dilemma arises. If we copy me and then destroy the original, that’s the end of me, because, as we concluded above, the copy is not me. Since the copy will do a convincing job of impersonating me, no one may know the difference, but it’s nonetheless the end of me.

  Consider replacing a tiny portion of my brain with its neuromorphic equivalent.

  Okay, I’m still here: the operation was successful (incidentally, nanobots will eventually do this without surgery). We know people like this already, such as those with cochlear implants, implants for Parkinson’s disease, and others. Now replace another portion of my brain: okay, I’m still here . . . and again. . . . At the end of the process, I’m still myself. There never was an “old Ray” and a “new Ray.” I’m the same as I was before. No one ever missed me, including me.