  94. Bowles, 2006, 2008, 2009.

  95. Churchland, 2008a.

  96. Libet, Gleason, Wright, & Pearl, 1983.

  97. Soon, Brass, Heinze, & Haynes, 2008. Libet later argued that while we don't have free will with respect to initiating behavior, we might have free will to veto an intention before it becomes effective (Libet, 1999, 2003). I think his reasoning was clearly flawed, as there is every reason to think that a conscious veto must also arise on the basis of unconscious neural events.

  98. Fisher, 2001; Wegner, 2002, 2004.

  99. Heisenberg, 2009; Kandel, 2008; Karczmar, 2001; Libet, 1999; McCrone, 2003; Planck & Murphy, 1932; Searle, 2001; Sperry, 1976.

  100. Heisenberg, 2009.

  101. One problem with this approach is that quantum mechanical effects are probably not, as a general rule, biologically salient. Quantum effects do drive evolution, as high-energy particles like cosmic rays cause point mutations in DNA, and the behavior of such particles passing through the nucleus of a cell is governed by the laws of quantum mechanics. Evolution, therefore, seems unpredictable in principle (Silver, 2006).

  102. The laws of nature do not strike most of us as incompatible with free will because we have not imagined how human action would appear if all cause-and-effect relationships were understood. But imagine that a mad scientist has developed a means of controlling the human brain at a distance: What would it be like to watch him send a person to and fro on the wings of her "will"? Would there be even the slightest temptation to impute freedom to her? No. But this mad scientist is nothing more than causal determinism personified. What makes his existence so inimical to our notion of free will is that when we imagine him lurking behind a person's thoughts and actions—tweaking electrical potentials, manufacturing neurotransmitters, regulating genes, etc.—we cannot help but let our notions of freedom and responsibility travel up the puppet's strings to the hand that controls them. To see that the addition of randomness does nothing to change this situation, we need only imagine the scientist basing the inputs to his machine on a shrewd arrangement of roulette wheels. How would such unpredictable changes in the states of a person's brain constitute freedom?

  Swapping any combination of randomness and natural law for a mad scientist, we can see that all the relevant features of a person's inner life would be conserved—thoughts, moods, and intentions would still arise and beget actions—and yet we are left with the undeniable fact that the conscious mind cannot be the source of its own thoughts and intentions. This discloses the real mystery of free will: if our experience is compatible with its utter absence, how can we say that we see any evidence for it in the first place?

  103. Dennett, 2003.

  104. The phrase "alien hand syndrome" describes a variety of neurological disorders in which a person no longer recognizes ownership of one of his hands. Actions of the nondominant hand in the split-brain patient can have this character, and in the acute phase after surgery this can lead to overt intermanual conflict. Zaidel et al. (2003) prefer the phrase "autonomous hand," as patients typically experience their hand to be out of control but do not ascribe ownership of it to someone else. Similar anomalies can be attributed to other neurological causes: for instance, in sensory alien hand syndrome (following a stroke in the right posterior cerebral artery) the right arm will sometimes choke or otherwise attack the left side of the body (Pryse-Phillips, 2003).

  105. See S. Harris, 2004, pp. 272-274.

  106. Burns & Bechara, 2007, p. 264.

  107. Others have made a similar argument. See Burns & Bechara, 2007, p. 264; J. Greene & Cohen, 2004, p. 1776.

  108. Cf. Levy, 2007.

  109. The neuroscientist Michael Gazzaniga writes:

  Neuroscience will never find the brain correlate of responsibility, because that is something we ascribe to humans—to people—not to brains. It is a moral value we demand of our fellow, rule-following human beings. Just as optometrists can tell us how much vision a person has (20/20 or 20/200) but cannot tell us when someone is legally blind or has too little vision to drive a school bus, so psychiatrists and brain scientists might be able to tell us what someone's mental state or brain condition is but cannot tell us (without being arbitrary) when someone has too little control to be held responsible. The issue of responsibility (like the issue of who can drive school buses) is a social choice. In neuroscientific terms, no person is more or less responsible than any other for actions. We are all part of a deterministic system that someday, in theory, we will completely understand. Yet the idea of responsibility, a social construct that exists in the rules of a society, does not exist in the neuronal structures of the brain (Gazzaniga, 2005, pp. 101-102).

  While it is true that responsibility is a social construct attributed to people and not to brains, it is a social construct that can make more or less sense given certain facts about a person's brain. I think we can easily imagine discoveries in neuroscience, along with advances in brain-imaging technology, that would allow us to attribute responsibility to persons in a far more precise way than we do at present. A "Twinkie defense" would be entirely uncontroversial if we learned that there was something in the creamy center of every Twinkie that obliterated the frontal lobe's inhibitory control over the limbic system.

  But perhaps "responsibility" is simply the wrong construct: for Gazzaniga is surely correct to say that "in neuroscientific terms, no person is more or less responsible than any other for actions." Conscious actions arise on the basis of neural events of which we are not conscious. Whether they are predictable or not, we do not cause our causes.

  110. Diamond, 2008.

  111. In the philosophical literature, one finds three approaches to the problem: determinism, libertarianism, and compatibilism. Both determinism and libertarianism are often referred to as "incompatibilist" views, in that both maintain that if our behavior is fully determined by background causes, free will is an illusion. Determinists believe that we live in precisely such a world; libertarians (no relation to the political view that goes by this name) believe that our agency rises above the field of prior causes—and they inevitably invoke some metaphysical entity, like a soul, as the vehicle for our freely acting wills. Compatibilists, like Daniel Dennett, maintain that free will is compatible with causal determinism (see Dennett, 2003; for other compatibilist arguments see Ayer, Chisholm, Strawson, Frankfurt, Dennett, and Watson—all in Watson, 1982). The problem with compatibilism, as I see it, is that it tends to ignore that people's moral intuitions are driven by deeper, metaphysical notions of free will. That is, the free will that people presume for themselves and readily attribute to others (whether or not this freedom is, in Dennett's sense, "worth wanting") is a freedom that slips the influence of impersonal, background causes. The moment you show that such causes are effective—as any detailed account of the neurophysiology of human thought and behavior would—proponents of free will can no longer locate a plausible hook upon which to hang their notions of moral responsibility. The neuroscientists Joshua Greene and Jonathan Cohen make the same point:

  Most people's view of the mind is implicitly dualist and libertarian and not materialist and compatibilist... [I]ntuitive free will is libertarian, not compatibilist. That is, it requires the rejection of determinism and an implicit commitment to some kind of magical mental causation ... contrary to legal and philosophical orthodoxy, determinism really does threaten free will and responsibility as we intuitively understand them (J. Greene & Cohen, 2004, pp. 1779-1780).

  Chapter 3: Belief

  1. Brains do not fossilize, so we cannot examine the brains of our ancient ancestors. But comparing the neuroanatomy of living primates offers some indication of the types of physical adaptations that might have led to the emergence of language. For instance, diffusion-tensor imaging of macaque, chimpanzee, and human brains reveals a gradual increase in the connectivity of the arcuate fasciculus—the fiber tract linking the temporal and frontal lobes. This suggests that the relevant adaptations were incremental, rather than saltatory (Ghazanfar, 2008).

  2. N. Patterson, Richter, Gnerre, Lander, & Reich, 2006, 2008.

  3. Wade, 2006.

  4. Sarmiento, Sawyer, Milner, Deak, & Tattersall, 2007; Wade, 2006.

  5. It seems, however, that the Neanderthal copy of the FOXP2 gene carried the same two crucial mutations that distinguish modern humans from other primates (Enard et al., 2002; Krause et al., 2007). FOXP2 is now known to play a central role in spoken language, and its disruption leads to severe linguistic impairments in otherwise healthy people (Lai, Fisher, Hurst, Vargha-Khadem, & Monaco, 2001). The introduction of a human FOXP2 gene into mice changes their ultrasonic vocalizations, decreases exploratory behavior, and alters cortico-basal ganglia circuits (Enard et al., 2009). The centrality of FOXP2 for language development in humans has led some researchers to conclude that Neanderthals could speak (Yong, 2008). In fact, one could argue that the faculty of speech must precede Homo sapiens, as "it is difficult to imagine the emergence of complex subsistence behaviors and selection for a brain size increase of approximately 75 percent, both since about 800,000 years ago, without complex social communication" (Trinkaus, 2007).

  Whether or not they could speak, the Neanderthals were impressive creatures. Their average cranial capacity was 1,520 cc, slightly larger than that of their Homo sapiens contemporaries. In fact, human cranial capacity has decreased by about 150 cc over the millennia to its current average of 1,340 cc (Gazzaniga, 2008). Generally speaking, the correlation between brain size and cognitive ability is less than straightforward, as there are several species that have larger brains than we do (e.g., elephants, whales, dolphins) without exhibiting signs of greater intelligence. There have been many efforts to find some neuroanatomical measure that reliably tracks cognitive ability, including allometric brain size (brain size proportional to body mass), "encephalization quotient" (brain size proportional to the expected brain size for similar animals, corrected for body mass; for primates, EQ = [brain weight] / [0.12 × (body weight)^0.67]), the size of the neocortex relative to the rest of the brain, etc. None of these metrics has proved especially useful. In fact, among primates, there is no better predictor of cognitive ability than absolute brain size, irrespective of body mass (Deaner, Isler, Burkart, & van Schaik, 2007). By this measure, our competition with Neanderthals looks especially daunting.
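
  For concreteness, a rough worked example of the EQ formula (assuming, as in Jerison's original formulation, weights in grams; taking the 1,340 cc human average above as roughly 1,340 g; and positing a hypothetical 65 kg body mass):

```latex
\mathrm{EQ}
  = \frac{\text{brain weight}}{0.12 \times (\text{body weight})^{0.67}}
  \approx \frac{1340}{0.12 \times 65000^{0.67}}
  \approx \frac{1340}{201}
  \approx 6.7
```

  On this convention, a human brain comes out at roughly 6.7 times the size expected for an animal of the same body mass—though, as noted above, the metric has not proved especially useful as a predictor of cognitive ability.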

  There are several genes involved in brain development that have been found to be differentially regulated in human beings compared to other primates; two of special interest are microcephalin and ASPM (the abnormal spindle-like microcephaly-associated gene). The modern variant of microcephalin, which regulates brain size, appeared approximately 37,000 years ago (more or less coincident with the ascendance of modern humans) and has increased in frequency under positive selection pressure ever since (P. D. Evans et al., 2005). One modern variant of ASPM, which also regulates brain size, has spread to high frequency in the last 5,800 years (Mekel-Bobrov et al., 2005). As these authors note, this can be loosely correlated with the spread of cities and the development of written language. The possible significance of these findings is also discussed in Gazzaniga (2008).

  6. Fitch, Hauser, & Chomsky, 2005; Hauser, Chomsky, & Fitch, 2002; Pinker & Jackendoff, 2005.

  7. Regrettably, language is also the basis of our ability to wage war effectively, to perpetrate genocide, and to render our planet uninhabitable.

  8. While general information sharing has been undeniably useful, there is good reason to think that the communication of specifically social information has driven the evolution of language (Dunbar, 1998, 2003). Humans also transmit social information (i.e., gossip) in greater quantity and with higher fidelity than nonsocial information (Mesoudi, Whiten, & Dunbar, 2006).

  9. Cf. S. Harris, 2004, pp. 243-244.

  10. A. R. Damasio, 1999.

  11. Westbury & Dennett, 1999.

  12. Bransford & McCarrell, 1977.

  13. Rumelhart, 1980.

  14. Damasio draws a similar distinction (A. R. Damasio, 1999).

  15. For the purposes of studying belief in the lab, therefore, there seems to be little problem in defining the phenomenon of interest: believing a proposition is the act of accepting it as "true" (e.g., marking it as "true" on a questionnaire); disbelieving a proposition is the act of rejecting it as "false"; and being uncertain about the truth value of a proposition is the disposition to do neither of these things, but to judge it, rather, as "undecidable."

  In our search for the neural correlates of subjective states like belief and disbelief, we are bound to rely on behavioral reports. Therefore, having presented an experimental subject with a written statement—e.g., the United States is larger than Guatemala—and watched him mark it as "true," it may occur to us to wonder whether we can take him at his word. Does he really believe that the United States is larger than Guatemala? Does this statement, in other words, really seem true to him? This is rather like worrying, with reference to a subject who has just performed a lexical decision task, whether a given stimulus really seems like a word to him. While it may seem reasonable to worry that experimental subjects might be poor judges of what they believe, or that they might attempt to deceive experimenters, such concerns seem misplaced—or, if appropriate here, they should haunt all studies of human perception and cognition. As long as we are content to rely on subjects to report their perceptual judgments (about when, or whether, a given stimulus appeared), or their cognitive ones (about what sort of stimulus it was), there seems to be no special problem taking reports of belief, disbelief, and uncertainty at face value. This is not to ignore the possibility of deception (or self-deception), implicit cognitive conflict, motivated reasoning, and other sources of confusion.

  16. Blakeslee, 2007.

  17. These considerations run somewhat against David Marr's influential thesis that any complex information-processing system should be understood first at the level of "computational theory" (i.e., the level of highest abstraction) in terms of its "goals" (Marr, 1982). Thinking in terms of goals can be extremely useful, of course, in that it unifies (and ignores) a tremendous amount of bottom-up detail: the goal of "seeing," for instance, is complicated at the level of its neural realization and, what is more, it has been achieved by at least forty separate evolutionary routes (Dawkins, 1996, p. 139). Consequently, thinking about "seeing" in terms of abstract computational goals can make a lot of sense. In a structure like the brain, however, the "goals" of the system can never be fully specified in advance. We currently have no inkling what else a region like the insula might be "for."

  18. There has been a long debate in neuroscience over whether the brain is best thought of as a collection of discrete modules or as a distributed, dynamical system. It seems clear, however, that both views are correct, depending on one's level of focus (J. D. Cohen & Tong, 2001). Some degree of modularity is now an undeniable property of brain organization, as damage to one brain region can destroy a specific ability (e.g., the recognition of faces) while sparing most others. There are also distinct differences in cell types and patterns of connectivity that articulate sharp borders between regions. And some degree of modularity is ensured by limitations on information transfer over large distances in the brain.

  While regional specialization is a general fact of brain organization, strict partitioning generally isn't: as has already been said, most regions of the brain serve multiple functions. And even within functionally specific regions, the boundaries between their current function and their possible functions are provisional, fuzzy, and in the case of any individual brain, guaranteed to be idiosyncratic. For instance, the brain shows a general capacity to recover from focal injuries, and this entails the recruitment and repurposing of other (generally adjacent) brain areas. Such considerations suggest that we cannot expect true isomorphism between brains—or even between a brain and itself across time.

  There is legitimate concern, however, that current methods of neuroimaging tend to beg the question in favor of the modularity thesis—leading, among uncritical consumers of this research, to a naïve picture of functional segregation in the brain. Consider functional magnetic resonance imaging (fMRI), which is the most popular method of neuroimaging at present. This technique does not give us an absolute measure of neural activity. Rather, it allows us to compare changes in blood flow throughout the brain between two experimental conditions. We can, for example, compare instances in which subjects believe statements to be true to instances in which they believe statements to be false. The resulting image reveals which regions of the brain are more active in one condition or the other. Because fMRI allows us to detect signal changes throughout the brain, it is not, in principle, blind to widely distributed or combinatorial processing. But its dependence on blood flow as a marker for neural activity reduces spatial and temporal resolution, and the statistical techniques we use to analyze our data require that we focus on relatively large clusters of activity. It is, therefore, in the very nature of the tool to deliver images that appear to confirm the modular organization of brain function (cf. Henson, 2005). The problem, as far as critics are concerned, is that this method of studying the brain ignores the fact that the whole brain is active in both experimental conditions (e.g., during belief and disbelief), and regions that don't survive this subtractive procedure may well be involved in the relevant information processing.
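
  As a toy illustration of this subtractive logic (a sketch only; the arrays, dimensions, and threshold below are invented for the example and do not correspond to any particular study):

```python
import numpy as np
from scipy import stats

# Hypothetical BOLD responses for one subject, stored as
# (trials, x, y, z) arrays for two experimental conditions.
rng = np.random.default_rng(0)
belief = rng.normal(size=(40, 16, 16, 16))
disbelief = rng.normal(size=(40, 16, 16, 16))
belief[:, 8, 8, 8] += 0.5  # simulate one voxel more active during "belief"

# The subtraction: a voxelwise paired t-test asking which voxels
# differ reliably between the two conditions.
t, p = stats.ttest_rel(belief, disbelief, axis=0)

# Threshold the statistical map. Real analyses also correct for multiple
# comparisons and require contiguous clusters; both steps favor large,
# spatially compact "modules" and discard weak, distributed signal.
active = p < 0.001
print(f"{active.sum()} of {active.size} voxels survive the subtraction")
```

  The thresholding step is where distributed, low-amplitude processing is most likely to be discarded—which is precisely the critics' point.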

  fMRI also rests on the assumption that there is a more or less linear relationship between changes in blood flow, as measured by blood-oxygen-level-dependent (BOLD) changes in the MR signal, and changes in neuronal activity (this linear model is sketched below). While the validity of fMRI seems generally well supported (Logothetis, Pauls, Augath, Trinath, & Oeltermann, 2001), there is some uncertainty about whether the assumed linear relationship between blood flow and neuronal activity holds for all mental processes (Sirotin & Das, 2009). There are also potential problems with comparing one brain state to another on the assumption that changes in brain function are additive in the way that the components of an experimental task may be (this is often referred to as the problem of "pure insertion") (Friston et al., 1996). Questions remain, too, about what "activity" is indicated by changes in the BOLD signal. The principal correlate of blood-flow changes in the brain appears to be presynaptic/neuromodulatory activity (as measured by local field potentials), not axonal spikes. This fact poses a few concerns for the interpretation of fMRI data: fMRI cannot readily differentiate activity that is specific to a given task from neuromodulation; nor can it differentiate bottom-up from top-down processing. In fact, fMRI may be blind to the difference between excitatory and inhibitory signals, as metabolism also increases with inhibition. It seems quite possible, for instance, that increases in recurrent inhibition in a given region might be associated with greater BOLD signal but decreased neuronal firing. For a discussion of these and other limitations of the technology, see Logothetis, 2008; M. S. Cohen, 1996, 2001. Such concerns notwithstanding, fMRI remains the most important tool for studying brain function in human beings noninvasively.
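
  A minimal sketch of the linearity assumption mentioned above (the gamma-shaped hemodynamic response function and its timing here are invented for illustration, not fitted to data):

```python
import numpy as np

# The linear model: the measured BOLD signal is treated as underlying
# neural activity convolved with a hemodynamic response function (HRF).
t = np.arange(0, 30, 0.5)                       # seconds
hrf = (t ** 5) * np.exp(-t) / 120.0             # toy gamma-shaped HRF, peak ~5 s
neural = np.zeros(120)
neural[[10, 50, 90]] = 1.0                      # three brief bursts of activity
bold = np.convolve(neural, hrf)[: neural.size]  # predicted BOLD: activity * HRF
```

  To the extent that this convolutional relationship fails to hold for some mental processes—as the Sirotin and Das findings suggest it may—inferences from BOLD changes back to neuronal activity become correspondingly less secure.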