On the flip side, we dislike any factor that stands in the way of that simplicity and causal concreteness. Uncertainty, chance, randomness, nonlinearity: these elements threaten our ability to explain, and to explain quickly and (seemingly) logically. And so, we do our best to eliminate them at every turn. Just like we decide that the last glass of wine to be poured is also most likely to contain all the beeswing if we see glasses of uneven clarity, we may think, to take one example, that someone has a hot hand in basketball if we see a number of baskets in a row (the hot-hand fallacy). In both cases, we are using too few observations to reach our conclusions. In the case of the glasses, we rely only on that bottle and not on the behavior of other similar bottles under various circumstances. In the case of basketball, we rely only on the short streak (the law of small numbers) and not on the variability inherent in any player’s game, which includes the occasional long streak. Or, to take another example, we think a coin is more likely to land on heads if it has fallen on tails a number of times in a row (the gambler’s fallacy), forgetting that short sequences don’t necessarily show the fifty-fifty distribution that would appear in the long term.
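The gap between short sequences and long-run proportions is easy to see in a quick simulation (a minimal sketch of the idea; the seed and run lengths here are arbitrary choices of mine, not from any study mentioned above):

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Flip a fair coin in short runs of 10: the heads counts vary widely,
# even though the long-run proportion converges toward one half.
short_runs = [sum(random.random() < 0.5 for _ in range(10)) for _ in range(5)]

# In a long run, streaks of several heads in a row appear by chance alone --
# no "hot hand" or "due for tails" required.
flips = [random.random() < 0.5 for _ in range(1000)]
longest = streak = 0
for heads in flips:
    streak = streak + 1 if heads else 0
    longest = max(longest, streak)
```

Run it a few times with different seeds and the same pattern holds: individual runs of ten flips routinely land far from five heads, and the thousand-flip sequence almost always contains a streak long enough to look meaningful to an explanation-hungry mind.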

  Whether we’re explaining why something has happened or inferring the likely cause of an event, our intuition often fails us because we prefer things to be much more controllable, predictable, and causally determined than they are in reality.

  From these preferences stem the errors in thinking that we make without so much as a second thought. We tend to deduce as we shouldn’t, arguing, as Holmes would put it, ahead of the data—and often in spite of the data. When things just “make sense” it is incredibly difficult to see them any other way.

  W.J. was a World War II veteran. He was gregarious, charming, and witty. He also happened to suffer from a form of epilepsy so incapacitating that, in 1960, he elected to have a drastic form of brain surgery. The connecting fabric between the left and right hemispheres of the brain that allows the two halves to communicate—his corpus callosum—would be severed. In the past, this form of treatment had been shown to have a dramatic effect on the incidence of seizures. Patients who had been unable to function could all of a sudden lead seizure-free lives. But did such a dramatic change to the brain’s natural connectivity come at a cost?

  At the time of W.J.’s surgery, no one really knew the answer. But Roger Sperry, a neuroscientist at Caltech who would go on to win a Nobel Prize in medicine for his work on hemispheric connectivity, suspected that it might. In animals, at least, a severing of the corpus callosum meant that the hemispheres became unable to communicate. What happened in one hemisphere was now a complete mystery to the other. Could this effective isolation occur in humans as well?

  The prevailing wisdom was an emphatic no. Our human brains were not animal brains. They were far more complicated, far too smart, far too evolved, really. And what better proof than all of the high-functioning patients who had undergone the surgery? This was no frontal lobotomy. These patients emerged with IQ intact and reasoning abilities aplenty. Their memory seemed unaffected. Their language abilities were normal.

  The resounding wisdom seemed intuitive and accurate. Except, of course, it was resoundingly wrong. No one had ever figured out a way to test it scientifically: it was a Watson just-so story that made sense, founded on the same absence of verified factual underpinnings. Until, that is, the scientific equivalent of Holmes arrived on the scene: Michael Gazzaniga, a young neuroscientist in Sperry’s lab. Gazzaniga found a way to test Sperry’s theory—that a severed corpus callosum rendered the brain hemispheres unable to communicate—with the use of a tachistoscope, a device that could present visual stimuli for specific periods of time, and, crucially, could do so to the right or the left visual field separately. (This lateral presentation meant that any information would go to only one of the two hemispheres.)

  When Gazzaniga tested W.J. after the surgery, the results were striking. The same man who had sailed through his tests weeks earlier could no longer describe a single object that was presented to his left visual field. When Gazzaniga flashed an image of a spoon to the right field, W.J. named it easily, but when the same picture was presented to the left, the patient seemed to have, in essence, gone blind. His eyes were fully functional, but he could neither verbalize nor recall having seen a single thing.

  What was going on? W.J. was Gazzaniga’s patient zero, the first in a long line of initials who all pointed in one direction: the two halves of our brains are not created equal. One half is responsible for processing visual inputs—it’s the one with the little window to the outside world, if you recall the Shel Silverstein image—but the other half is responsible for verbalizing what it knows—it’s the one with the staircase to the rest of the house. When the two halves have been split apart, the bridge that connects the two no longer exists. Any information available to one side may as well not exist as far as the other is concerned. We have, in effect, two separate mind attics, each with its unique storage, contents, and, to some extent, structure.

  And here’s where things get really tricky. If you show a picture of, say, a chicken claw to just the left visual field (which means the picture will be processed only by the right hemisphere of the brain—the visual one, with the window) and one of a snowy driveway to just the right visual field (which means it will be processed only by the left hemisphere—the one with the communicating staircase), and then ask the individual to point at an image most closely related to what he’s seen, the two hands don’t agree: the right hand (controlled by the left hemisphere, which saw the snow) will point to a shovel, while the left hand (controlled by the right hemisphere, which saw the claw) will point to a chicken. Ask the person why he’s pointing to two objects, and instead of being confused he’ll at once create an entirely plausible explanation: you need a shovel to clean out the chicken coop. His mind has created an entire story, a narrative that will make plausible sense of his hands’ discrepancy, when in reality it all goes back to those silent images.

  Gazzaniga calls the left hemisphere our left-brain interpreter, driven to seek causes and explanations—even for things that may not have them, or at least not readily available to our minds—in a natural and instinctive fashion. But while the interpreter’s explanations make perfect sense, they are more often than not flat-out wrong, the Watson of the wineglasses taken to an extreme.

  Split-brain patients provide some of the best scientific evidence of our proficiency at narrative self-deception, at creating explanations that make sense but are in reality far from the truth. But we don’t even need to have our corpus callosum severed to act that way. We do it all the time, as a matter of course. Remember that pendulum study of creativity, where subjects were able to solve the problem after the experimenter had casually set one of the two cords in motion? When subjects were then asked where their insight had come from, they cited many causes. “It was the only thing left.” “I just realized the cord would swing if I fastened a weight to it.” “I thought of the situation of swinging across a river.” “I had imagery of monkeys swinging from trees.”

  All plausible enough. None correct. No one mentioned the experimenter’s ploy. And even when told about it later, over two-thirds continued to insist that they had not noted it and that it had had no impact at all on their own solutions—although they had reached those solutions, on average, within forty-five seconds of the hint. What’s more, even the one-third that admitted the possibility of influence proved susceptible to false explanation. When a decoy cue (twirling the weight on a cord) was presented, which had no impact on the solution, they cited that cue, and not the actual one that helped them, as having prompted their behavior.

  Our minds form cohesive narratives out of disparate elements all the time. We’re not comfortable if something doesn’t have a cause, and so our brains determine a cause one way or the other, without asking our permission to do so. When in doubt, our brains take the easiest route, and they do so at every stage of the reasoning process, from forming inferences to generalizations.

  W.J. is but a more extreme example of the exact thing that Watson does with the wineglasses. In both instances there is the spontaneous construction of story, and then a firm belief in its veracity, even when it hinges on nothing more than its seeming cohesiveness. That is deductive problem number one.

  Even though all of the material is there for the taking, the possibility of ignoring some of it, knowingly or not, is real. Memory is highly imperfect, and highly subject to change and influence. Even our observations themselves, while accurate enough to begin with, may end up affecting our recall and, hence, our deductive reasoning more than we think. We must be careful lest we let something that caught our attention, whether because it is out of all proportion (salience) or because it just happened (recency) or because we’ve been thinking about something totally unrelated (priming or framing), weigh too heavily in our reasoning and make us forget other details that are crucial for proper deduction. We must also be sure that we answer the same question we posed in the beginning, the one that was informed by our initial goals and motivation, and not one that somehow seems more pertinent or intuitive or easier, now that we’ve reached the end of the thought process. Why do Lestrade and the rest of the detectives so often persist in wrongful arrests, even when all evidence points to the contrary? Why do they keep pushing their original story, as if failing to note altogether that it is coming apart at the seams? It’s simple, really. We don’t like to admit our initial intuition to be false and would much rather dismiss the evidence that contradicts it. It is perhaps why wrongful arrests are so sticky even outside the world of Conan Doyle.

  The precise mistakes or the names we give them don’t matter as much as the broad idea: we often aren’t mindful in our deduction, and the temptation to gloss over and jump to the end becomes ever stronger the closer we get to the finish line. Our natural stories are so incredibly compelling that they are tough to ignore or reverse. They get in the way of Holmes’s dictate of systematized common sense, of going through all alternatives, one by one, sifting the crucial from the incidental, the improbable from the impossible, until we reach the only answer.

  As a simple illustration of what I mean, consider the following questions. I want you to write down the first answer that comes to your mind. Ready?

  1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

  2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

  3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

  You have just taken Shane Frederick’s Cognitive Reflection Test (CRT). If you are like most people, chances are you wrote down at least one of the following: $0.10 for question one; 100 minutes for question two; and 24 days for question three. In each case, you would have been wrong. But you would have been wrong in good company. When the questions were asked of Harvard students, the average score was 1.43 correct (with 57 percent of students getting either zero or one right). At Princeton, a similar story: 1.63 correct, and 45 percent scoring zero or one. And even at MIT, the scores were far from perfect: 2.18 correct on average, with 23 percent, or nearly a quarter, of students getting either none or one correct. These “simple” problems are not as straightforward as they may seem at first glance.

  The correct answers are $0.05, 5 minutes, and 47 days, respectively. If you take a moment to reflect, you will likely see why—and you’ll say to yourself, Of course, how did I ever miss that? Simple. Good old System Watson has won out once again. The initial answers are the intuitively appealing ones, the ones that come quickly and naturally to mind if we don’t pause to reflect. We let the salience of certain elements (and they were framed to be salient on purpose) draw us away from considering each element fairly and accurately. We use mindless verbatim strategies—repeating an element of the problem in our answer rather than reflecting on the actual best strategy for solving it—instead of mindful ones (in essence, substituting an intuitive question for the more difficult and time-consuming alternative, just because the two happen to seem related). The correct answers require you to suppress System Watson’s eager response and let Holmes take a look: to reflect, inhibit your initial intuition, and then edit it accordingly, which is not something that we are overly eager to do, especially when we are tired from all the thinking that came before. It’s tough to keep that motivation and mindfulness going from start to finish, and far easier to conserve our cognitive resources by letting Watson take the helm.
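The arithmetic behind those correct answers can be written out in a few lines (a minimal sketch; the variable names are mine):

```python
# Q1: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: (ball + 1.00) + ball = 1.10  ->  2 * ball = 0.10.
ball = (1.10 - 1.00) / 2   # $0.05, not the intuitive $0.10
bat = ball + 1.00          # $1.05

# Q2: 5 machines make 5 widgets in 5 minutes, so one machine makes
# one widget per 5 minutes. 100 machines make 100 widgets in parallel
# in that same time -- still 5 minutes, not 100.
minutes = 5

# Q3: the patch doubles daily and covers the lake on day 48, so it
# covered half the lake exactly one doubling earlier: day 47, not 24.
half_cover_day = 48 - 1
```

Each intuitive answer comes from copying a salient number from the problem ($0.10, 100 minutes, 24 days); each correct one comes from working the relationship the problem actually states.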

  While the CRT may seem far removed from any real problems we might encounter, it happens to be remarkably predictive of our performance in any number of situations where logic and deduction come into play. In fact, this test is often more telling than are measures of cognitive ability, thinking disposition, and executive function. Good performance on these three little questions predicts resistance to a number of common logical fallacies, which, taken together, are considered to predict adherence to the basic structures of rational thought. The CRT even predicts our ability to reason through the type of formal deductive problem—the Socrates one—that we saw earlier in the chapter: if you do poorly on the test, you are more likely to say that if all living things need water and roses need water, it follows that roses are living things.

  Jumping to conclusions, telling a selective story instead of a logical one, even with all of the evidence in front of you and well sorted, is common (though avoidable, as you’ll see in just a moment). Reasoning through everything up until the last moment, not letting those mundane details bore you, not letting yourself peter out toward the end of the process: that is altogether rare. We need to learn to take pleasure in the lowliest manifestations of reason. To take care that deduction not seem boring, or too simple, after all of the effort that has preceded it. That is a difficult task. In the opening lines of “The Adventure of the Copper Beeches,” Holmes reminds us, “To the man who loves art for its own sake, it is frequently in its least important and lowliest manifestations that the keenest pleasure is to be derived. . . . If I claim full justice for my art, it is because it is an impersonal thing—a thing beyond myself. Crime is common. Logic is rare.” Why? Logic is boring. We think we’ve already figured it out. In pushing past this preconception lies the challenge.

  Learning to Tell the Crucial from the Incidental

  So how do you start from the beginning and make sure that your deduction is going along the right track and has not veered fabulously off course before it has even begun?

  In “The Crooked Man,” Sherlock Holmes describes a new case, the death of Sergeant James Barclay, to Watson. At first glance the facts are strange indeed. Barclay and his wife, Nancy, were heard to be arguing in the morning room. The two were usually affectionate, and so the argument in itself was something of an event. But it became even more striking when the housemaid found the door to the room locked and its occupants unresponsive to her knocks. Add to that a strange name that she heard several times—David—and then the most remarkable fact of all: after the coachman succeeded in entering the room from outside through the open French doors, no key was to be found. The lady was lying insensible on the couch, the gentleman dead, with a jagged cut on the back of his head and his face twisted in horror. And neither one possessed the key that would open the locked door.

  How to make sense of these multiple elements? “Having gathered these facts, Watson,” Holmes tells the doctor, “I smoked several pipes over them, trying to separate those which were crucial from others which were merely incidental.” And that, in one sentence, is the first step toward successful deduction: the separation of those factors that are crucial to your judgment from those that are just incidental, to make sure that only the truly central elements affect your decision.

  Consider the following descriptions of two people, Bill and Linda. Each description is followed by a list of occupations and avocations. Your task is to rank the items in each list by the degree to which Bill or Linda resembles the typical member of the class.

  Bill is thirty-four years old. He is intelligent but unimaginative, compulsive, and generally lifeless. In school he was strong in mathematics but weak in social studies and humanities.

  Bill is a physician who plays poker for a hobby.

  Bill is an architect.

  Bill is an accountant.

  Bill plays jazz for a hobby.

  Bill is a reporter.

  Bill is an accountant who plays jazz for a hobby.

  Bill climbs mountains for a hobby.

  Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

  Linda is a teacher in an elementary school.

  Linda works in a bookstore and takes yoga classes.

  Linda is active in the feminist movement.

  Linda is a psychiatric social worker.

  Linda is a member of the League of Women Voters.

  Linda is a bank teller.

  Linda is an insurance salesperson.

  Linda is a bank teller and is active in the feminist movement.

  After you’ve made your ranking, take a look at two pairs of statements in particular: Bill plays jazz for a hobby and Bill is an accountant who plays jazz for a hobby, and Linda is a bank teller and Linda is a bank teller and is active in the feminist movement. Which of the two statements have you ranked as more likely in each pair?

  I am willing to bet that it was the second one in both cases. If it was, you’d be with the majority, and you would be making a big mistake.

  This exercise was taken verbatim from a 1983 paper by Amos Tversky and Daniel Kahneman, to illustrate our present point: when it comes to separating crucial details from incidental ones, we often don’t fare particularly well. When the researchers’ subjects were presented with these lists, they repeatedly made the same judgment that I’ve just predicted you would make: that it was more likely that Bill was an accountant who plays jazz for a hobby than it was that he plays jazz for a hobby, and that it was more likely that Linda was a feminist bank teller than that she was a bank teller at all.
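The judgment violates a basic rule of probability: a conjunction can never be more probable than either of its parts, since P(A and B) = P(A) × P(B given A), and P(B given A) can never exceed 1. A minimal sketch makes the point (the numbers here are invented purely for illustration, not from the study):

```python
# Hypothetical probabilities for the Linda problem.
p_bank_teller = 0.02           # chance Linda is a bank teller at all
p_feminist_given_teller = 0.9  # even if "feminist" seems near-certain...

# P(teller AND feminist) = P(teller) * P(feminist | teller).
p_both = p_bank_teller * p_feminist_given_teller

# ...the conjunction still cannot exceed the single category it contains.
assert p_both <= p_bank_teller
```

However well the description fits the added detail, every feminist bank teller is also a bank teller, so the narrower category can only shrink the probability. The vivid detail makes the story more *representative*, never more *likely*.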