Page 15 of Incognito


  But shouldn’t the contradicting evidence alert these people to a problem? After all, the patient wants to move his hand, but it is not moving. He wants to clap, but he hears no sound. It turns out that alerting the system to contradictions relies critically on particular brain regions—and one in particular, called the anterior cingulate cortex. Because of these conflict-monitoring regions, incompatible ideas will result in one side or another winning out: a story will be constructed that either makes them compatible or ignores one side of the debate. In special circumstances of brain damage, this arbitration system can be damaged—and then conflict can cause no trouble to the conscious mind. This situation is illustrated by a woman I’ll call Mrs. G., who had suffered quite a bit of damage to her brain tissue from a recent stroke. At the time I met her, she was recovering in the hospital with her husband by her bedside, and seemed generally in good health and spirits. My colleague Dr. Karthik Sarma had noticed the night before that when he asked her to close her eyes, she would close only one and not the other. So he and I went to examine this more carefully.

  When I asked her to close her eyes, she said “Okay,” and closed one eye, as in a permanent wink.

  “Are your eyes closed?” I asked.

  “Yes,” she said.

  “Both eyes?”

  “Yes.”

  I held up three fingers. “How many fingers am I holding up, Mrs. G.?”

  “Three,” she said.

  “And your eyes are closed?”

  “Yes.”

  In a nonchallenging way I said, “Then how did you know how many fingers I was holding up?”

  An interesting silence followed. If brain activity were audible, this is when we would have heard different regions of her brain battling it out. Political parties that wanted to believe her eyes were closed were locked in a filibuster with parties that wanted the logic to work out: Don’t you see that we can’t have our eyes closed and be able to see out there? Often these battles are quickly won by the party with the most reasonable position, but this does not always happen with anosognosia. The patient will say nothing and will conclude nothing—not because she is embarrassed, but because she is simply locked up on the issue. Both parties fatigue to the point of attrition, and the original issue being fought over is finally dumped. The patient will conclude nothing about the situation. It is amazing and disconcerting to witness.

  I was struck with an idea. I wheeled Mrs. G. to a position just in front of the room’s only mirror and asked if she could see her own face. She said yes. I then asked her to close both her eyes. Again she closed one eye and not the other.

  “Are both your eyes closed?”

  “Yes.”

  “Can you see yourself?”

  “Yes.”

  Gently I said, “Does it seem possible to see yourself in the mirror if both your eyes are closed?”

  Pause. No conclusion.

  “Does it look to you like one eye is closed or that both are closed?”

  Pause. No conclusion.

  She was not distressed by the questions; nor did they change her opinion. What would have been a checkmate in a normal brain proved to be a quickly forgotten game in hers.

  Cases like Mrs. G.’s allow us to appreciate the amount of work that needs to happen behind the scenes for our zombie systems to work together smoothly and come to an agreement. Keeping the union together and making a good narrative does not happen for free—the brain works around the clock to stitch together a pattern of logic to our daily lives: what just happened and what was my role in it? Fabrication of stories is one of the key businesses in which our brains engage. Brains do this with the single-minded goal of getting the multifaceted actions of the democracy to make sense. As the coin puts it, E pluribus unum: out of many, one.

  * * *

  Once you have learned how to ride a bicycle, the brain does not need to cook up a narrative about what your muscles are doing; instead, it doesn’t bother the conscious CEO at all. Because everything is predictable, no story is told; you are free to think of other issues as you pedal along. The brain’s storytelling powers kick into gear only when things are conflicting or difficult to understand, as for the split-brain patients or anosognosics like Justice Douglas.

  In the mid-1990s my colleague Read Montague and I ran an experiment to better understand how humans make simple choices. We asked participants to choose between two cards on a computer screen, one labeled A and the other labeled B. The participants had no way of knowing which was the better choice, so they picked arbitrarily at first. Their card choice gave them a reward somewhere between a penny and a dollar. Then the cards were reset and they were asked to choose again. Picking the same card produced a different reward this time. There seemed to be a pattern to it, but it was very difficult to detect. What the participants didn’t know was that the reward in each round was based on a formula that incorporated the history of their previous forty choices—far too difficult for the brain to detect and analyze.
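The setup can be made concrete with a toy simulation. The actual reward formula from the experiment is not given here, so the moving-window rule below — the 40-choice window, the payout curve, and the 30% "sweet spot" — is a hypothetical stand-in chosen only to show how a reward can depend on choice history in a way that is hard to detect by feel:

```python
import random

HISTORY = 40  # reward depends on the last forty choices, as in the text


def reward(choices):
    """Hypothetical reward rule (not the experiment's actual formula):
    pay out based on the fraction of 'A' picks among the last HISTORY
    choices."""
    recent = choices[-HISTORY:]
    frac_a = recent.count("A") / max(len(recent), 1)
    # Payout between $0.01 and $1.00, highest when roughly 30% of the
    # recent picks were 'A' -- a structure too subtle to spot by feel.
    return round(0.01 + 0.99 * (1 - abs(frac_a - 0.3)), 2)


# Players start by picking arbitrarily; each pick shifts the window
# that determines the next payout.
choices = []
for _ in range(10):
    pick = random.choice("AB")
    choices.append(pick)
    payout = reward(choices)
```

Because each payout is a function of the whole recent window rather than the single card just picked, switching behavior changes future rewards in ways a player cannot consciously track — fertile ground for the confabulated strategies described next.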

  The interesting part came when I interviewed the players afterward. I asked them what they’d done in the gambling game and why they’d done it. I was surprised to hear all types of baroque explanations, such as “The computer liked it when I switched back and forth” and “The computer was trying to punish me, so I switched my game plan.” In reality, the players’ descriptions of their own strategies did not match what they had actually done, which turned out to be highly predictable.41 Nor did their descriptions match the computer’s behavior, which was purely formulaic. Instead, their conscious minds, unable to assign the task to a well-oiled zombie system, desperately sought a narrative. The participants weren’t lying; they were giving the best explanation they could—just like the split-brain patients or the anosognosics.

  Minds seek patterns. In a term introduced by science writer Michael Shermer, they are driven toward “patternicity”—the attempt to find structure in meaningless data.42 Evolution favors pattern seeking, because it allows the possibility of reducing mysteries to fast and efficient programs in the neural circuitry.

  To demonstrate patternicity, researchers in Canada showed subjects a light that flashed on and off randomly and asked them to choose which of two buttons to press, and when, in order to make the blinking more regular. The subjects tried out different patterns of button pressing, and eventually the light began to blink regularly. They had succeeded! Now the researchers asked them how they’d done it. The subjects overlaid a narrative interpretation about what they’d done, but the fact is that their button pressing was wholly unrelated to the behavior of the light: the blinking would have drifted toward regularity irrespective of what they were doing.

  For another example of storytelling in the face of confusing data, consider dreams, which appear to be an interpretative overlay to nighttime storms of electrical activity in the brain. A popular model in the neuroscience literature suggests that dream plots are stitched together from essentially random activity: discharges of neural populations in the midbrain. These signals tickle into existence the simulation of a scene in a shopping mall, or a glimpse of recognition of a loved one, or a feeling of falling, or a sense of epiphany. All these moments are dynamically woven into a story, and this is why after a night of random activity you wake up, roll over to your partner, and feel as though you have a bizarre plot to relate. Ever since I was a child, I have been consistently amazed at how characters in my dreams possess such specific and peculiar details, how they come up with such rapid answers to my questions, how they produce such surprising dialogue and such inventive suggestions—all manner of things I would not have invented “myself.” Many times I’ve heard a new joke in a dream, and this impressed me greatly—not because the joke was so funny in the sober light of day (it wasn’t) but because it was not a joke I could believe I would have thought of. But, at least presumably, it was my brain and no one else’s cooking up these interesting plotlines.43 Like the split-brain patients or Justice Douglas, dreams illustrate our skills at spinning a single narrative from a collection of random threads. Your brain is remarkably good at maintaining the glue of the union, even in the face of thoroughly inconsistent data.

  WHY DO WE HAVE CONSCIOUSNESS AT ALL?

  Most neuroscientists study animal models of behavior: how a sea slug withdraws from a touch, how a mouse responds to rewards, how an owl localizes sounds in the dark. As these circuits are scientifically brought to light, they all reveal themselves to be nothing but zombie systems: blueprints of circuitry that respond to particular inputs with appropriate outputs. If our brains were composed only of these patterns of circuits, why would it feel like anything to be alive and conscious? Why wouldn’t it feel like nothing—like a zombie?

  A decade ago, neuroscientists Francis Crick and Christof Koch asked, “Why does not our brain consist simply of a series of specialized zombie systems?”44 In other words, why are we conscious of anything at all? Why aren’t we simply a vast collection of these automated, burned-down routines that solve problems?

  Crick and Koch’s answer, like mine in the previous chapters, is that consciousness exists to control—and to distribute control over—the automated alien systems. A system of automated subroutines that reaches a certain level of complexity (and human brains certainly qualify) requires a high-level mechanism to allow the parts to communicate, dispense resources, and allocate control. As we saw earlier with the tennis player trying to learn how to serve, consciousness is the CEO of the company: he sets the higher-level directions and assigns new tasks. We have learned in this chapter that he doesn’t need to understand the software that each department in the organization uses; nor does he need to see their detailed logbooks and sales receipts. He merely needs to know whom to call on when.

  As long as the zombie subroutines are running smoothly, the CEO can sleep. It is only when something goes wrong (say, all the departments suddenly find that their business models have catastrophically failed) that the CEO is rung up. Think about when your conscious awareness comes online: in those situations where events in the world violate your expectations. When everything is going according to the needs and skills of your zombie systems, you are not consciously aware of most of what’s in front of you; when suddenly they cannot handle the task, you become consciously aware of the problem. The CEO scrambles around, looking for fast solutions, dialing up everyone to find who can address the problem best.

  The scientist Jeff Hawkins offers a nice example of this: after he entered his home one day, he realized that he had experienced no conscious awareness of reaching for, grasping, and turning the doorknob. It was a completely robotic, unconscious action on his part—and this was because everything about the experience (the doorknob’s feel and location, the door’s size and weight, and so on) was already burned down into unconscious circuitry in his brain. It was expected, and therefore required no conscious participation. But he realized that if someone were to sneak over to his house, drill the doorknob out, and replace it three inches to the right, he would notice immediately. Instead of his zombie systems getting him directly into his house with no alerts or concerns, suddenly there would be a violation of expectations—and consciousness would come online. The CEO would rouse, turn on the alarms, and try to figure out what might have happened and what should be done next.

  If you think you’re consciously aware of most of what surrounds you, think again. The first time you make the drive to your new workplace, you attend to everything along the way. The drive seems to take a long time. By the time you’ve made the drive many times, you can get yourself there without much in the way of conscious deliberation. You are now free to think about other things; you feel as though you’ve left home and arrived at work in the blink of an eye. Your zombie systems are experts at taking care of business as usual. It is only when you see a squirrel in the road, or a missing stop sign, or an overturned vehicle on the shoulder that you become consciously aware of your surroundings.

  All of this is consistent with a finding we learned two chapters ago: when people play a new video game for the first time, their brains are alive with activity. They are burning energy like crazy. As they get better at the game, less and less brain activity is involved. They have become more energy efficient. If you measure someone’s brain and see very little activity during a task, it does not necessarily indicate that they’re not trying—it more likely signifies that they have worked hard in the past to burn the programs into the circuitry. Consciousness is called in during the first phase of learning and is excluded from the game playing after it is deep in the system. Playing a simple video game becomes as unconscious a process as driving a car, producing speech, or performing the complex finger movements required for tying a shoelace. These become hidden subroutines, written in an undeciphered programming language of proteins and neurochemicals, and there they lurk—for decades sometimes—until they are next called upon.

  From an evolutionary point of view, the purpose of consciousness seems to be this: an animal composed of a giant collection of zombie systems would be energy efficient but cognitively inflexible. It would have economical programs for doing particular, simple tasks, but it wouldn’t have rapid ways of switching between programs or setting goals to become expert in novel and unexpected tasks. In the animal kingdom, most animals do certain things very well (say, prying seeds from the inside of a pine cone), while only a few species (such as humans) have the flexibility to dynamically develop new software.

  Although the ability to be flexible sounds better, it does not come for free—the trade-off is a burden of lengthy childrearing. To be flexible like an adult human requires years of helplessness as an infant. Human mothers typically bear only one child at a time and have to provide a period of care that is unheard-of (and impracticable) in the rest of the animal kingdom. In contrast, animals that run only a few very simple subroutines (such as “Eat foodlike things and shrink away from looming objects”) adopt a different rearing strategy, usually something like “Lay lots of eggs and hope for the best.” Without the ability to write new programs, their only available mantra is: If you can’t outthink your opponents, outnumber them.

  So are other animals conscious? Science currently has no meaningful way to make a measurement to answer that question—but I offer two intuitions. First, consciousness is probably not an all-or-nothing quality, but comes in degrees. Second, I suggest that an animal’s degree of consciousness will parallel its intellectual flexibility. The more subroutines an animal possesses, the more it will require a CEO to lead the organization. The CEO keeps the subroutines unified; it is the warden of the zombies. To put this another way, a small corporation does not require a CEO who earns three million dollars a year, but a large corporation does. The only difference is the number of workers the CEO has to keep track of, allocate among, and set goals for.**

  If you put a red egg in the nest of a herring gull, it goes berserk. The color red triggers aggression in the bird, while the shape of the egg triggers brooding behavior—as a result, it tries to simultaneously attack the egg and incubate it.45 It’s running two programs at once, with an unproductive end result. The red egg sets off sovereign and conflicting programs, wired into the gull’s brain like competing fiefdoms. The rivalry is there, but the bird has no capacity to arbitrate in the service of smooth cooperation. Similarly, if a female stickleback trespasses onto a male’s territory, the male will display attack behavior and courtship behavior simultaneously, which is no way to win over a lady. The poor male stickleback appears to be simply a bundled collection of zombie programs triggered by simple lock-and-key inputs (Trespass! Female!), and the subroutines have not found any method of arbitration between them. This seems to me to suggest that the herring gull and the stickleback are not particularly conscious.

  I propose that a useful index of consciousness is the capacity to successfully mediate conflicting zombie systems. The more an animal looks like a jumble of hardwired input–output subroutines, the less it gives evidence of consciousness; the more it can coordinate, delay gratification, and learn new programs, the more conscious it may be. If this view is correct, in the future a battery of tests might be able to yield a rough measure of a species’ degree of consciousness. Think back to the befuddled rat we met near the beginning of the chapter, who, trapped between the drive to go for the food and the impulse to run from the shock, became stuck in between and oscillated back and forth. We all know what it’s like to have moments of indecision, but our human arbitration between the programs allows us to escape these conundrums and make a decision. We quickly find ways of cajoling or castigating ourselves toward one outcome or the other. Our CEO is sophisticated enough to get us out of the simple lockups that can thoroughly hamstring the poor rat. This may be the way in which our conscious minds—which play only a small part in our total neural function—really shine.

  THE MULTITUDES
