*Incidentally, the lobotomy lost favor not so much because of ethical concerns as because psychoactive drugs came on the market at the beginning of the 1950s, providing a more expedient approach to the problem.
Life After the Monarchy
“As for men, those myriad little detached ponds with their own swarming corpuscular life, what were they but a way that water has of going about beyond the reach of rivers?”
—Loren Eiseley, “The Flow of the River”, The Immense Journey
FROM DETHRONEMENT TO DEMOCRACY
After Galileo discovered the moons of Jupiter with his homemade telescope in 1610, religious critics decried the sun-centered theory he championed as a dethronement of man. They didn’t suspect that this was only the first dethronement of several. One hundred years later, the study of sedimentary layers by the Scottish farmer James Hutton toppled the Church’s estimate of the age of the Earth—making it eight hundred thousand times older. Not long afterward, Charles Darwin relegated humans to just another branch in the swarming animal kingdom. At the beginning of the 1900s, quantum mechanics irreparably altered our notion of the fabric of reality. In 1953, Francis Crick and James Watson deciphered the structure of DNA, replacing the mysterious ghost of life with something that we can write down in sequences of four letters and store in a computer.
And over the past century, neuroscience has shown that the conscious mind is not the one driving the boat. A mere four hundred years after our fall from the center of the universe, we have experienced the fall from the center of ourselves. In the first chapter we saw that conscious access to the machinery under the hood is slow, and often doesn’t happen at all. We then learned that the way we see the world is not necessarily what’s out there: vision is a construction of the brain, and its only job is to generate a useful narrative at our scales of interactions (say, with ripe fruits, bears, and mates). Visual illusions reveal a deeper concept: that our thoughts are generated by machinery to which we have no direct access. We saw that useful routines become burned down into the circuitry of the brain, and that once they are there, we no longer have access to them. Instead, consciousness seems to be about setting goals for what should be burned into the circuitry, and it does little beyond that. In Chapter 5 we learned that minds contain multitudes, which explains why you can curse at yourself, laugh at yourself, and make contracts with yourself. And in Chapter 6 we saw that brains can operate quite differently when they are changed by strokes, tumors, narcotics, or any of a variety of events that alter the biology. This agitates our simple notions of blameworthiness.
In the wake of all this scientific progress, a troubling question has surfaced in the minds of many: what is left for humans after all these dethronements? For some thinkers, as the immensity of the universe became more apparent, so did humankind’s inconsequentiality—we began to dwindle in importance virtually to the vanishing point. It became clear that the epochal time scales of civilizations represented only a flash in the long history of multicellular life on the planet, and the history of life is only a flash in the history of the planet itself. And that planet, in the vastness of the universe, is only a tiny speck of matter floating away from other specks at cosmic speed through the desolate curvature of space. Billions of years from now, this vigorous, productive planet will be consumed in the expansion of the sun. As Leslie Paul wrote in Annihilation of Man:
All life will die, all mind will cease, and it will all be as if it had never happened. That, to be honest, is the goal to which evolution is traveling, that is the “benevolent” end of the furious living and furious dying.… All life is no more than a match struck in the dark and blown out again. The final result … is to deprive it completely of meaning.1
After building many thrones and falling from all of them, man looked around; he wondered whether he had accidentally been generated in a blind and purposeless cosmic process, and he strove to salvage some sort of purpose. As the theologian E. L. Mascall wrote:
The difficulty which civilized Western man in the world today experiences is in convincing himself that he has any special assigned status in the universe.… Many of the psychological disorders which are so common and distressing a feature of our time are, I believe, to be traced to this cause.2
Philosophers such as Heidegger, Jaspers, Shestov, Kierkegaard, and Husserl all scrambled to address the meaninglessness with which the dethronements seemed to have left us. In his 1942 book Le mythe de Sisyphe (The Myth of Sisyphus), Albert Camus introduced his philosophy of the absurd, in which man searches for meaning in a fundamentally meaningless world. In this context, Camus proposed that the only real question in philosophy is whether or not to commit suicide. (He concluded that one should not commit suicide; instead, one should live to revolt against the absurd life, even though it will always be without hope. It is possible that he was forced to this conclusion because the opposite would have impeded sales of his book unless he followed his own prescription—a tricky catch-22.)
I suggest that the philosophers may have been taking the news of the dethronements a bit too hard. Is there really nothing left for mankind after all these dethronements? The situation is likely to be the opposite: as we plumb further down, we will discover ideas much broader than the ones we currently have on our radar screens, in the same way that we have begun to discover the gorgeousness of the microscopic world and the incomprehensible scale of the cosmos. The act of dethronement tends to open up something bigger than us, ideas more wonderful than we had originally imagined. Each discovery taught us that reality far outstrips human imagination and guesswork. These advances deflated the power of intuition and tradition as oracles of our future, replacing them with more productive ideas, bigger realities, and new levels of awe.
In the case of Galileo’s discovery that we are not at the center of the universe, we now know something much greater: that our solar system is one of billions of trillions. As I mentioned earlier, even if life emerges only on one planet in a billion, it means there may be millions and millions of planets teeming with activity in the cosmos. To my mind, that’s a bigger and brighter idea than sitting at a lonely center surrounded by cold and distant astral lamps. The dethronement led to a richer, deeper understanding, and what we lost in egocentrism was counterbalanced by surprise and wonder.
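As a rough check of that arithmetic (the specific count is my illustrative assumption, not the author’s: read “billions of trillions” of solar systems as roughly $10^{21}$ candidate planets, with life arising on one in a billion of them):

\[
10^{21} \times \frac{1}{10^{9}} = 10^{12}
\]

That is on the order of a trillion living worlds, which comfortably covers “millions and millions.”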
Similarly, understanding the age of the Earth opened previously unimaginable time vistas, which in turn opened the possibility of understanding natural selection. Natural selection is used daily in laboratories around the globe to select colonies of bacteria in research to combat disease. Quantum mechanics has given us the transistor (the heart of our electronics industry), lasers, magnetic resonance imaging, diodes, and memory in USB flash drives—and may soon deliver the revolutions of quantum computing, tunneling, and teleportation. Our understanding of DNA and the molecular basis of inheritance has allowed us to target disease in ways that were unimaginable a half century ago. By taking seriously the discoveries of science, we have eradicated smallpox, traveled to the moon, and launched the information revolution. We have tripled life spans, and by targeting diseases at the molecular level, we will soon float the average life span beyond one hundred years. Dethronements often equal progress.
In the case of the dethronement of the conscious mind, we gain better inroads to understand human behavior. Why do we find things beautiful? Why are we bad at logic? Who’s cursing at whom when we get mad at ourselves? Why do people fall for the allure of adjustable-rate mortgages? How can we steer a car so well but find ourselves unable to describe the process?
This improved understanding of human behavior can translate directly into improved social policy. As one example, an understanding of the brain matters for structuring incentives. Recall the fact from Chapter 5 that people negotiate with themselves, making an endless series of Ulysses contracts. This leads to ideas like the proposed diet plan from that chapter: people who want to lose weight can deposit a good deal of money into an escrow account. If they meet their weight-loss goal by a specified deadline, they get the money back; otherwise they lose it all. This structure allows people in a moment of sober reflection to recruit support against their short-term decision making—after all, they know that their future self will be tempted to eat with impunity. Understanding this aspect of human nature allows this sort of contract to be usefully introduced in various settings—for example, getting an employee to siphon a small portion of his monthly paycheck into an individual retirement account. By making the decision up front, he can avoid the temptation of spending later.
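For readers who like to see a mechanism spelled out, here is a minimal sketch of that escrow structure in Python. Everything in it is hypothetical scaffolding for illustration: the class name, the dollar amounts, and the weigh-in logic are mine, not part of the plan described in Chapter 5.

from datetime import date

class CommitmentContract:
    """Minimal sketch of a Ulysses contract: money held in escrow is
    returned only if a self-imposed goal is met by a deadline."""

    def __init__(self, deposit, goal_weight, deadline):
        self.deposit = deposit          # money placed beyond easy reach
        self.goal_weight = goal_weight  # target chosen during sober reflection
        self.deadline = deadline        # date by which the goal must be met

    def settle(self, actual_weight, today):
        """Return the refund owed: the full deposit if the goal was
        met in time, nothing otherwise."""
        if today <= self.deadline and actual_weight <= self.goal_weight:
            return self.deposit  # long-term self wins: the money comes back
        return 0                 # short-term self wins: the deposit is forfeit

# Hypothetical usage: commit in January, weigh in before a June deadline.
contract = CommitmentContract(deposit=500, goal_weight=180,
                              deadline=date(2025, 6, 1))
print(contract.settle(actual_weight=178, today=date(2025, 5, 20)))  # 500

The design point matches the prose: the rule is fixed in a moment of sober reflection, so the later, tempted self has no way to renegotiate it.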
Our deeper understanding of the inner cosmos also gives us a clearer view of philosophical concepts. Take virtue. For millennia, philosophers have been asking what it is and what we can do to enhance it. The team-of-rivals framework gives new inroads here. We can often interpret the rivalrous elements in the brain as analogous to engine and brakes: some elements are driving you toward a behavior while others are trying to stop you. At first blush, one might think virtue consists of not wanting to do bad things. But in a more nuanced framework, a virtuous person can have strong lascivious drives as long as he also commands sufficient braking power to surmount them. (It is also the case that a virtuous actor can have minimal temptations and therefore no requirement for good brakes, but one could suggest that the more virtuous person is he who has fought a stronger battle to resist temptation rather than he who was never enticed.) This sort of approach is possible only when we have a clear view of the rivalry under the hood, and not if we believe people possess only a single mind (as in mens rea, “the guilty mind”). With the new tools, we can consider a more nuanced battle between different brain regions and how the battle tips. And that opens up new opportunities for rehabilitation in our legal system: when we understand how the brain is really operating and why impulse control fails in some fraction of the population, we can develop direct new strategies to strengthen long-term decision making and tip the battle in its favor.
Additionally, an understanding of the brain has the potential to elevate us to a more enlightened system of sentencing. As we saw in the previous chapter, we will be able to replace the problematic concept of blameworthiness with a practical, forward-looking corrections system (What is this person likely to do from here?) instead of a retrospective one (How much was it his fault?). Someday the legal system may be able to approach neural and behavioral problems in the same manner that medicine studies lung or bone problems. Such biological realism will not clear criminals, but instead will introduce rational sentencing and customized rehabilitation by adopting a prospective approach instead of a retrospective one.
A better understanding of neurobiology may lead to better social policy. But what does it mean for understanding our own lives?
KNOWING THYSELF
“Know then thyself, presume not God to scan;
The proper study of mankind is man.”
—Alexander Pope
On February 28, 1571, on the morning of his thirty-eighth birthday, the French essayist Michel de Montaigne decided to make a radical change in his life’s trajectory. He quit his career in public life, set up a library with one thousand books in a tower at the back of his large estate, and spent the rest of his life writing essays about the complex, fleeting, protean subject that interested him the most: himself. His first conclusion was that a search to know oneself is a fool’s errand, because the self continuously changes and keeps ahead of a firm description. That didn’t stop him from searching, however, and his question has resonated through the centuries: Que sais-je? (What do I know?)
It was, and remains, a good question. Our exploration of the inner cosmos certainly disabuses us of our initial, uncomplicated, intuitive notions of knowing ourselves. We see that self-knowledge requires as much work from the outside (in the form of science) as from the inside (introspection). This is not to say that we cannot grow better at introspection. After all, we can learn to pay attention to what we’re really seeing out there, as a painter does, and we can attend more closely to our internal signals, as a yogi does. But there are limits to introspection. Just consider the fact that your peripheral nervous system employs one hundred million neurons to control the activities in your gut (this is called the enteric nervous system). One hundred million neurons, and no amount of your introspection can touch this. Nor, most likely, would you want it to. It’s better off running as the automated, optimized machinery that it is, routing food along your gut and providing chemical signals to control the digestion factory without asking your opinion on the matter.
Beyond lack of access, there could even be prevention of access. My colleague Read Montague once speculated that we might have algorithms that protect us from ourselves. For example, computers have boot sectors that are inaccessible to the operating system—they are too important to the machine’s operation for any higher-level system to find inroads and gain admission, under any circumstances. Montague noted that whenever we try to think about ourselves too much, we tend to “blink out”—and perhaps this is because we are getting too close to the boot sector. As Ralph Waldo Emerson wrote over a century earlier, “Everything intercepts us from ourselves.”
Much of who we are remains outside our opinion or choice. Imagine trying to change your sense of beauty or attraction. What would happen if society asked you to develop and maintain an attraction to someone of the gender to which you are currently not attracted? Or someone well outside the age range to which you are currently attracted? Or outside your species? Could you do it? Doubtful. Your most fundamental drives are stitched into the fabric of your neural circuitry, and they are inaccessible to you. You find certain things more attractive than others, and you don’t know why.
Like your enteric nervous system and your sense of attraction, almost the entirety of your inner universe is foreign to you. The ideas that strike you, your thoughts during a daydream, the bizarre content of your night dreams—all these are served up to you from unseen intracranial caverns.
So what does all of this mean for the Greek admonition γνῶθι σεαυτόν—know thyself—inscribed prominently in the forecourt of the Temple of Apollo at Delphi? Can we ever know ourselves more deeply by studying our neurobiology? Yes, but with some caveats. In the face of the deep mysteries presented by quantum physics, the physicist Niels Bohr once suggested that an understanding of the structure of the atom could be accomplished only by changing the definition of “to understand.” One could no longer draw pictures of an atom, true, but instead one could now predict the outcomes of experiments on its behavior out to fourteen decimal places. Lost assumptions were replaced by something richer.
By the same token, to know oneself may require a change of definition of “to know.” Knowing yourself now requires the understanding that the conscious you occupies only a small room in the mansion of the brain, and that it has little control over the reality constructed for you. The invocation to know thyself needs to be considered in new ways.
Let’s say you wanted to know more about the Greek idea of knowing thyself, and you asked me to explain it further. It probably wouldn’t be helpful if I said, “Everything you need to know is in the individual letters: γ ν ῶ θ ι σ ε α υ τ ό ν.” If you don’t read Greek, the elements are nothing but arbitrary shapes. And even if you do read Greek, there’s so much more to the idea than the letters—instead you would want to know the culture from which it sprang, the emphasis on introspection, the suggestion of a path to enlightenment.3 Understanding the phrase requires more than learning the letters. And this is the situation we’re in when we look at trillions of neurons and their sextillions of voyaging proteins and biochemicals. What does it mean to know ourselves from that totally unfamiliar perspective? As we will see in a moment, we need the neurobiological data, but we also need quite a bit more to know ourselves.
Biology is a terrific approach, but it’s limited. Consider lowering a medical scope down your lover’s throat while he or she reads poetry to you. Get a good, close-up view of your lover’s vocal cords, slimy and shiny, contracting in and out in spasms. You could study this until you were nauseated (maybe sooner rather than later, depending on your tolerance for biology), but it would get you no closer to understanding why you love nighttime pillow talk. By itself, in its raw form, the biology gives only partial insight. It’s the best we can do right now, but it’s far from complete. Let’s turn to this in more detail now.
WHAT IT DOES AND DOESN’T MEAN TO BE CONSTRUCTED OF PHYSICAL PARTS
One of the most famous examples of brain damage comes from a twenty-five-year-old work-gang foreman named Phineas Gage. The Boston Post reported on him in a short article on September 21, 1848, under the headline “Horrible Accident”:
As Phineas P. Gage, a foreman on the railroad in Cavendish, was yesterday engaged in tamping for a blast, the powder exploded, carrying an instrument through his head an inch and a fourth in [diameter], and three feet and [seven] inches in length, which he was using at the time. The iron entered on the side of his face, shattering the upper jaw, and passing back of the left eye, and out at the top of the head.
The iron tamping rod clattered to the ground twenty-five yards away. While Gage wasn’t the first to have his skull punctured and a portion of his brain spirited away by a projectile, he was the first not to die from it. In fact, Gage did not even lose consciousness.
The first physician to arrive, Dr. Edward H. Williams, did not believe Gage’s statement of what had just happened, but instead “thought he [Gage] was deceived.” But Williams soon understood the gravity of what had happened when “Mr. G. got up and vomited; the effort of vomiting pressed out about half a teacupful of the brain, which fell upon the floor.”