4 For example, this approach is used commonly in artificial neural networks: Jacobs, Jordan, Nowlan, and Hinton, “Adaptive mixtures.”
5 Minsky, Society of Mind.
6 Ingle, “Two visual systems,” discussed in a larger framework by Milner and Goodale, The Visual Brain.
7 For the importance of conflict in the brain, see Edelman, Computing the Mind. An optimal brain can be composed of conflicting agents; see Livnat and Pippenger, “An optimal brain”; Tversky and Shafir, “Choice under conflict”; Festinger, Conflict, Decision, and Dissonance. See also Cohen, “The vulcanization,” and McClure et al., “Conflict monitoring.”
8 Miller, “Personality,” as cited in Livnat and Pippenger, “An optimal brain.”
9 For a review of dual-process accounts, see Evans, “Dual-processing accounts.”
10 See Table 1 of ibid.
11 Freud, Beyond the Pleasure Principle (1920). His three-part model of the psyche was expanded three years later in Das Ich und das Es (The Ego and the Id), available in Freud, The Standard Edition.
12 See, for example: Mesulam, Principles of Behavioral and Cognitive Neurology; Elliott, Dolan, and Frith, “Dissociable functions”; and Faw, “Pre-frontal executive committee.” There are many subtleties of the neuroanatomy and debates within the field, but these details are not central to my argument and will therefore be relegated to these references.
13 Some authors have referred to these systems, dryly, as System 1 and System 2 processes (see, for example, Stanovich, Who Is Rational? or Kahneman and Frederick, “Representativeness revisited”). For our purposes, we use what we hope are the most intuitive (if imperfect) labels: the emotional and rational systems. This choice is common in the field; see, for example, Cohen, “The vulcanization,” and McClure et al., “Conflict monitoring.”
14 In this sense, emotional responses can be viewed as information processing—every bit as complex as a math problem but occupied with the internal world rather than the outside. The output of their processing—brain states and bodily responses—can provide a simple plan of action for the organism to follow: do this, don’t do that.
15 Greene, et al., “The neural bases of cognitive conflict.”
16 See Niedenthal, “Embodying emotion,” and Haidt, “The new synthesis.”
17 Frederick, Loewenstein, and O’Donoghue, “Time discounting.”
18 McClure, Laibson, Loewenstein, and Cohen, “Separate neural systems.” Specifically, when choosing longer-term rewards with higher return, the lateral prefrontal and posterior parietal cortices were more active.
19 R. J. Shiller, “Infectious exuberance,” Atlantic Monthly, July/August 2008.
20 Freud, “The future of an illusion,” in The Standard Edition.
21 Illinois Daily Republican, Belvidere, IL, January 2, 1920.
22 Arlie R. Slabaugh, Christmas Tokens and Medals (Chicago: printed by Author, 1966), ANA Library Catalogue No. RM85.C5S5.
23 James Surowiecki, “Bitter money and Christmas clubs,” Forbes.com, February 14, 2006.
24 Eagleman, “America on deadline.”
25 Thomas C. Schelling, Choice and Consequence (Cambridge, MA: Harvard University Press, 1984); Ryan Spellecy, “Reviving Ulysses contracts,” Kennedy Institute of Ethics Journal 13, no. 4 (2003): 373–92; Namita Puran, “Ulysses contracts: Bound to treatment or free to choose?” York Scholar 2 (2005): 42–51.
26 There is no guarantee that the ethics boards accurately guess at the mental life of the future patient; then again, Ulysses contracts always suffer from imperfect knowledge of the future.
27 This phrase is borrowed from my colleague Jonathan Downar, who put it as “If you can’t rely on your own dorsolateral prefrontal cortex, borrow someone else’s.” As much as I love the original phrasing, I’ve simplified it for the present purposes.
28 For a detailed summary of decades of split-brain studies, see Tramo et al., “Hemispheric specialization.” For a lay-audience summary, see Michael Gazzaniga, “The split-brain revisited.”
29 Jaynes, The Origin of Consciousness.
30 See, for example, Rauch, Shin, and Phelps, “Neurocircuitry models.” For an investigation of the relationship between fearful memories and the perception of time, see Stetson, Fiesta, and Eagleman, “Does time really … ?”
31 Here’s another aspect to consider about memory and the ceaseless reinvention hypothesis: neuroscientists do not think of memory as one phenomenon but, instead, as a collection of many different subtypes. On the broadest scale, there is short-term and long-term memory. Short-term memory involves remembering a phone number long enough to dial it. Within the long-term category there is declarative memory (for example, what you ate for breakfast and what year you got married) and nondeclarative memory (how to ride a bicycle); for an overview, see Eagleman and Montague, “Models of learning.” These divisions have been introduced because patients can sometimes damage one subtype without damaging others—an observation that has led neuroscientists to hope that memory can be categorized into separate silos. But it is likely that the final picture of memory won’t divide so neatly into natural categories; instead, as per the theme of this chapter, different memory mechanisms will overlap in their domains. (See, for example, Poldrack and Packard, “Competition,” for a review of separable “cognitive” and “habit” memory systems that rely on the medial temporal lobe and basal ganglia, respectively.) Any circuit that contributes to memory, even a bit, will be strengthened and can make its contribution. If true, this will go some distance toward explaining an enduring mystery to young residents entering the neurology clinic: why do real patient cases only rarely match the textbook descriptions? Textbooks assume neat categorization, while real brains ceaselessly reinvent overlapping strategies. As a result, real brains are robust—and they are also resistant to humancentric labeling.
32 For a review of different models of motion detection, see Clifford and Ibbotson, “Fundamental mechanisms.”
33 There are many examples of this inclusion of multiple solutions in modern neuroscience. Take, for instance, the motion aftereffect, mentioned in Chapter 2. If you stare at a waterfall for a minute or so, then look away at something else—say, the rocks on the side—it will look as though the stationary rocks are moving upward. This illusion results from an adaptation of the system; essentially, the visual brain realizes that it is deriving little new information from all the downward motion, and it starts to adjust its internal parameters in the direction of canceling out the downwardness. As a result, something stationary now begins to look like it’s moving upward. For decades, scientists debated whether the adaptation happens at the level of the retina, at the early stages of the visual system, or at later stages of the visual system. Years of careful experiments have finally resolved this debate by dissolving it: there is no single answer to the question, because it is ill-posed. There is adaptation at many different levels in the visual system (Mather, Pavan, Campana, and Casco, “The motion aftereffect”). Some areas adapt quickly, some slowly, others at speeds in between. This strategy allows some parts of the brain to sensitively follow changes in the incoming data stream, while others will not change their stubborn ways without lasting evidence. Returning to the issue of memory discussed above, it is also theorized that Mother Nature has found several methods to store memories at several different time scales, and it is the interaction of all these time scales that makes older memories more stable than young memories. The fact that older memories are more stable is known as Ribot’s law. For more on the idea of exploiting different time scales of plasticity, see Fusi, Drew, and Abbott, “Cascade models.”
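The multiple-time-scales idea in this note can be made concrete with a toy model. The sketch below is purely illustrative and not from any cited paper: several leaky integrators adapt to the same input at different rates, so the fast unit tracks a change almost immediately while the slow unit shifts only after sustained evidence. The function name and rate values are my own assumptions for the illustration.

```python
# Illustrative sketch (not from the cited work): units adapting to the
# same signal at different time scales, loosely analogous to fast- and
# slow-adapting stages of the visual system described above.

def run_adaptation(signal, rates):
    """Return each unit's adaptation level after seeing the signal."""
    states = [0.0] * len(rates)
    for x in signal:
        for i, rate in enumerate(rates):
            # Each unit drifts toward the current input at its own rate.
            states[i] += rate * (x - states[i])
    return states

# A sustained burst of constant "downward motion" (value 1.0):
signal = [1.0] * 50
fast, slow = run_adaptation(signal, rates=[0.5, 0.01])
print(fast > 0.99)   # the fast unit has essentially fully adapted
print(slow < 0.5)    # the slow unit has barely budged
```

The point of the toy model is only that a system built from both kinds of unit can respond quickly to new statistics while remaining stable against transient noise, which is the intuition behind the cascade models cited above.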
34 In a wider biological context, the team-of-rivals framework accords well with the idea that the brain is a Darwinian system, one in which stimuli from the outside world happen to resonate with certain random patterns of neural circuitry, and not with others. Those circuits that happen to respond to stimuli in the outside world are strengthened, and other random circuits continue to drift around until they find something to resonate with. If they never find anything to “excite” them, they die off. To phrase it from the opposite direction, stimuli in the outside world “pick out” circuits in the brain: they happen to interact with some circuits and not others. The team-of-rivals framework is nicely compatible with neural Darwinism, and emphasizes that Darwinian selection of neural circuitry will tend to strengthen multiple circuits—of very different provenance—all of which happen to resonate with a stimulus or task. These circuits are the multiple factions in the brain’s congress. For views on the brain as a Darwinian system, see Gerald Edelman, Neural Darwinism; Calvin, How Brains Think; Dennett, Consciousness Explained; or Hayek, The Sensory Order.
35 See Weiskrantz, “Outlooks” and Blindsight.
36 Technically, reptiles don’t see much outside of the immediate reach of their tongues, unless something is moving wildly. So if you’re resting on a lounge chair ten feet away from a lizard, you most likely don’t exist to him.
37 See, for example, Crick and Koch, “The unconscious homunculus,” for use of the term zombie systems.
38 A recent finding shows that the Stroop effect can disappear following posthypnotic suggestion. Amir Raz and his colleagues selected a pool of hypnotizable subjects using a completely independent test battery. Under hypnosis, subjects were told that in a later task, they would attend to only ink color. Under these conditions, when the subjects were tested, the Stroop interference essentially vanished. Hypnosis is not a phenomenon that is well understood at the level of the nervous system; nor is it understood why some subjects are more hypnotizable than others, and what exactly the role of attention, or of reward patterns, might be in explaining the effects. Nevertheless, the data raise intriguing questions about conflict reduction between internal variables, such as a desire to run versus a desire to stay and fight. See Raz, Shapiro, Fan, and Posner, “Hypnotic suggestion.”
39 Bem, “Self-perception theory”; Eagleman, “The where and when of intention.”
40 Gazzaniga, “The split-brain revisited.”
41 Eagleman, Person, and Montague, “A computational role for dopamine.” In this paper we constructed a model based on the reward systems in the brain, and ran this model on the same computer game. Astoundingly, the simple model captured the important features of the human strategies, which suggested that people’s choices were being driven by surprisingly simple underlying mechanisms.
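The kind of “surprisingly simple underlying mechanism” this note alludes to can be illustrated with a minimal reward-prediction-error update. This sketch is my own illustration, not the model from the cited paper, whose details differ: an action's value estimate moves toward each observed reward by a fraction alpha (the learning rate), with the error term playing a role loosely analogous to a dopamine signal.

```python
# Minimal illustrative sketch (not the model in the cited paper):
# a value estimate updated by reward prediction error.

def update_value(value, reward, alpha=0.1):
    """Move the value estimate a step toward the observed reward."""
    prediction_error = reward - value      # dopamine-like error signal
    return value + alpha * prediction_error

v = 0.0
for _ in range(100):            # repeated rewards of 1.0
    v = update_value(v, reward=1.0)
print(round(v, 2))              # the estimate converges toward 1.0
```

Even a rule this simple, fed into a choice mechanism, can reproduce much of the trial-by-trial structure of human play in simple games, which is the sense in which complex-looking strategies can be driven by simple mechanisms.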
42 M. Shermer, “Patternicity: Finding meaningful patterns in meaningless noise,” Scientific American, December 2008.
43 For simplicity, I’ve related the random-activity hypothesis of dream content, known technically as the activation-synthesis model (Hobson and McCarley, “The brain as a dream state generator”). In fact, there are many theories of dreaming. Freud suggested that dreams are a disguised attempt at wish fulfillment; however, this may be unlikely in the face of, say, the repetitive dreams of post-traumatic stress disorder. Later, in the 1970s, Jung proposed that dreams are compensating for aspects of the personality neglected in waking life. The problem here is that the themes of dreams seem to be the same everywhere, across cultures and generations—themes such as being lost, preparing meals, or being late for an examination—and it’s a bit difficult to explain what these things have to do with personality neglect. In general, however, I would like to emphasize that despite the popularity of the activation-synthesis hypothesis in neurobiology circles, there is much about dream content that remains deeply unexplained.
44 Crick and Koch, “Constraints.”
45 Tinbergen, “Derived activities.”
46 Kelly, The Psychology of Secrets.
47 Pennebaker, “Traumatic experience.”
48 Petrie, Booth, and Pennebaker, “The immunological effects.”
49 To be clear, the team-of-rivals framework, by itself, doesn’t solve the whole AI problem. The next difficulty is in learning how to control the subparts, how to dynamically allocate control to expert subsystems, how to arbitrate battles, how to update the system on the basis of recent successes and failures, how to develop a meta-knowledge of how the parts will act when confronted with temptations in the near future, and so on. Our frontal lobes have developed over millions of years using biology’s finest tricks, and we still have not teased out the riddles of their circuitry. Nonetheless, understanding the correct architecture from the get-go is our best way forward.
Chapter 6. Why Blameworthiness Is the Wrong Question
1 Lavergne, A Sniper in the Tower.
2 Report to Governor, Charles J. Whitman Catastrophe, Medical Aspects, September 8, 1966.
3 S. Brown and E. Shafer, “An investigation into the functions of the occipital and temporal lobes of the monkey’s brain,” Philosophical Transactions of the Royal Society of London: Biological Sciences 179 (1888): 303–27.
4 Klüver and Bucy, “Preliminary analysis.” This constellation of symptoms, usually accompanied by hypersexuality and hyperorality, is known as Klüver-Bucy syndrome.
5 K. Bucher, R. Myers, and C. Southwick, “Anterior temporal cortex and maternal behaviour in monkey,” Neurology 20 (1970): 415.
6 Burns and Swerdlow, “Right orbitofrontal tumor.”
7 Mendez, et al., “Psychiatric symptoms associated with Alzheimer’s disease”; Mendez, et al., “Acquired sociopathy and frontotemporal dementia.”
8 M. Leann Dodd, Kevin J. Klos, James H. Bower, Yonas E. Geda, Keith A. Josephs, and J. Eric Ahlskog, “Pathological gambling caused by drugs used to treat Parkinson disease,” Archives of Neurology 62, no. 9 (2005): 1377–81.
9 For a solid foundation and clear exposition of the reward systems, see Montague, Your Brain Is (Almost) Perfect.
10 Rutter, “Environmentally mediated risks”; Caspi and Moffitt, “Gene–environment interactions.”
11 The guilty mind is known as mens rea. If you commit the guilty act (actus reus) but did not provably have mens rea, you are not culpable.
12 Broughton, et al., “Homicidal somnambulism.”
13 As of this writing, there have been sixty-eight cases of homicidal somnambulism in North American and European courts, the first one recorded in the 1600s. While we can assume that some fraction of these cases involve dishonest pleas, not all of them do. These same considerations of parasomnias have come into courtrooms more recently with sleep sex—for example, rape or infidelity while sleeping—and several defendants have been acquitted on these grounds.
14 Libet, Gleason, Wright, and Pearl, “Time”; Haggard and Eimer, “On the relation”; Kornhuber and Deecke, “Changes”; Eagleman, “The where and when of intention”; Eagleman and Holcombe, “Causality”; Soon, et al., “Unconscious determinants of free decisions.”
15 Not everyone agrees that Libet’s simple test constitutes a meaningful test of free will. As Paul McHugh points out, “What else would one expect when studying a capricious act with neither consequence nor significance to the actor?”
16 Remember, criminal behavior is not entirely about the actor’s genes alone. Diabetes and lung disease are influenced by high-sugar foods and elevated air pollution, as well as a genetic predisposition. In the same way, biology and the external environment interact in criminality.
17 Bingham, Preface.
18 See Eagleman and Downar, Cognitive Neuroscience.