What Just Happened: A Chronicle From the Information Frontier
The next year Shannon returned with a robot. It was not a very clever robot, nor lifelike in appearance, but it impressed the cybernetics group. It solved mazes. They called it Shannon’s rat.
He wheeled out a cabinet with a five-by-five grid on its top panel. Partitions could be placed around and between any of the twenty-five squares to make mazes in different configurations. A pin could be placed in any square to serve as the goal, and moving around the maze was a sensing rod driven by a pair of little motors, one for east-west and one for north-south. Under the hood lay an array of electrical relays, about seventy-five of them, interconnected, switching on and off to form the robot’s “memory.” Shannon flipped the switch to power it up.
“When the machine was turned off,” he said, “the relays essentially forgot everything they knew, so that they are now starting afresh, with no knowledge of the maze.” His listeners were rapt. “You see the finger now exploring the maze, hunting for the goal. When it reaches the center of a square, the machine makes a new decision as to the next direction to try.”♦ When the rod hit a partition, the motors reversed and the relays recorded the event. The machine made each “decision” based on its previous “knowledge”—it was impossible to avoid these psychological words—according to a strategy Shannon had designed. It wandered about the space by trial and error, turning down blind alleys and bumping into walls. Finally, as they all watched, the rat found the goal, a bell rang, a lightbulb flashed on, and the motors stopped.
Then Shannon put the rat back at the starting point for a new run. This time it went directly to the goal without making any wrong turns or hitting any partitions. It had “learned.” Placed in other, unexplored parts of the maze, it would revert to trial and error until, eventually, “it builds up a complete pattern of information and is able to reach the goal directly from any point.”♦
To carry out the exploring and goal-seeking strategy, the machine had to store one piece of information for each square it visited: namely, the direction by which it last left the square. There were only four possibilities—north, west, south, east—so, as Shannon carefully explained, two relays were assigned as memory for each square. Two relays meant two bits of information, enough for a choice among four alternatives, because there were four possible states: off-off, off-on, on-off, and on-on.
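Restated in modern terms, the bookkeeping is tiny. Here is a minimal sketch in Python, assuming a row-major numbering of the twenty-five squares; the particular bit assignments and all names are illustrative, not Shannon's:

```python
# A minimal sketch of the two-bits-per-square memory. The bit
# assignments below are an assumption for illustration.

DIRECTIONS = {0b00: "north", 0b01: "west", 0b10: "south", 0b11: "east"}

# One two-bit entry per square, each bit standing in for the on/off
# state of one relay: 25 squares x 2 bits = 50 bits of direction memory.
memory = [0b00] * 25  # powered up afresh: no knowledge of the maze

def remember(square: int, direction_bits: int) -> None:
    """Record the direction by which the rod last left this square."""
    memory[square] = direction_bits & 0b11

def recall(square: int) -> str:
    """Read back the stored exit direction for a square."""
    return DIRECTIONS[memory[square]]

remember(12, 0b11)   # leaving the center square heading east
print(recall(12))    # -> east
```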
Next Shannon rearranged the partitions so that the old solution would no longer work. The machine would then “fumble around” till it found a new solution. Sometimes, however, a particularly awkward combination of previous memory and a new maze would put the machine in an endless loop. He showed them: “When it arrives at A, it remembers that the old solution said to go to B, and so it goes around the circle, A, B, C, D, A, B, C, D. It has established a vicious circle, or a singing condition.”♦
“A neurosis!” said Ralph Gerard.
Shannon added “an antineurotic circuit”: a counter, set to break out of the loop when the machine repeated the same sequence six times. Leonard Savage saw that this was a bit of a cheat. “It doesn’t have any way to recognize that it is ‘psycho’—it just recognizes that it has been going too long?” he asked. Shannon agreed.
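The antineurotic circuit amounts to a watchdog counter grafted onto the goal-seeking strategy. A rough sketch of the idea, with Savage's caveat built in (nothing below recognizes a cycle as such; it only counts), and with the data structures assumed for illustration:

```python
import random

# A sketch of the "antineurotic circuit" as a watchdog counter. The
# threshold, the random fallback, and the data structures are
# assumptions: `neighbors` maps each square to its legal moves,
# {direction: next_square}, with partitioned-off directions absent.

LOOP_THRESHOLD = 6 * 4  # roughly six circuits of a four-square loop

def run(start, goal, memory, neighbors):
    """Follow remembered exit directions until the counter trips,
    then abandon the old habit and explore at random."""
    square, moves = start, 0
    while square != goal:
        moves += 1
        if moves <= LOOP_THRESHOLD and memory.get(square) in neighbors[square]:
            square = neighbors[square][memory[square]]   # trust memory
        else:
            direction = random.choice(list(neighbors[square]))
            memory[square] = direction                   # relearn
            square = neighbors[square][direction]
    return moves
```

As Savage observed, the counter never diagnoses a vicious circle; it only notices that a run has been going too long.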
SHANNON AND HIS MAZE (Illustration credit 8.2)
“It is all too human,” remarked Lawrence K. Frank.
“George Orwell should have seen this,” said Henry Brosin, a psychiatrist.
A peculiarity of the way Shannon had organized the machine’s memory—associating a single direction with each square—was that the path could not be reversed. Having reached the goal, the machine did not “know” how to return to its origin. The knowledge, such as it was, emerged from what Shannon called the vector field, the totality of the twenty-five directional vectors. “You can’t say where the sensing finger came from by studying the memory,” he explained.
“Like a man who knows the town,” said McCulloch, “so he can go from any place to any other place, but doesn’t always remember how he went.”♦
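McCulloch's analogy can be made precise: the memory is a function from squares to exit directions, and such a function runs forward but need not invert. A small demonstration, with hypothetical entries:

```python
# The memory is a function from squares to exit directions (Shannon's
# "vector field"). Hypothetical entries on the 5x5 grid, squares
# numbered 0-24 row by row:
vector_field = {7: "south", 11: "east", 13: "west", 17: "north"}

MOVES = {"north": -5, "south": +5, "east": +1, "west": -1}

# All four squares exit into square 12. Standing at 12, the memory
# alone cannot say which one the sensing finger came from.
predecessors_of_12 = [s for s, d in vector_field.items()
                      if s + MOVES[d] == 12]
print(predecessors_of_12)  # -> [7, 11, 13, 17]
```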
Shannon’s rat was kin to Babbage’s silver dancer and the metal swans and fishes of Merlin’s Mechanical Museum: automata performing a simulation of life. They never failed to amaze and entertain. The dawn of the information age brought a whole new generation of synthetic mice, beetles, and turtles, made with vacuum tubes and then transistors. They were crude, almost trivial, by the standards of just a few years later. In the case of the rat, the creature’s total memory amounted to seventy-five bits. Yet Shannon could fairly claim that it solved a problem by trial and error; retained the solution and repeated it without the errors; integrated new information from further experience; and “forgot” the solution when circumstances changed. The machine was not only imitating lifelike behavior; it was performing functions previously reserved for brains.
One critic, Dennis Gabor, a Hungarian electrical engineer who later won the Nobel Prize for inventing holography, complained, “In reality it is the maze which remembers, not the mouse.”♦ This was true up to a point. After all, there was no mouse. The electrical relays could have been placed anywhere, and they held the memory. They became, in effect, a mental model of a maze—a theory of a maze.
The postwar United States was hardly the only place where biologists and neuroscientists were suddenly making common cause with mathematicians and electrical engineers—though Americans sometimes talked as though it was. Wiener, who recounted his travels to other countries at some length in his introduction to Cybernetics, wrote dismissively that in England he had found researchers to be “well-informed” but that not much progress had been made “in unifying the subject and in pulling the various threads of research together.”♦ New cadres of British scientists began coalescing in response to information theory and cybernetics in 1949—mostly young, with fresh experience in code breaking, radar, and gun control. One of their ideas was to form a dining club in the English fashion—“limited membership and a post-prandial situation,” proposed John Bates, a pioneer in electroencephalography. This required considerable discussion of names, membership rules, venues, and emblems. Bates wanted electrically inclined biologists and biologically oriented engineers and suggested “about fifteen people who had Wiener’s ideas before Wiener’s book appeared.”♦ They met for the first time in the basement of the National Hospital for Nervous Diseases, in Bloomsbury, and decided to call themselves the Ratio Club—a name meaning whatever anyone wanted. (Their chroniclers Philip Husbands and Owen Holland, who interviewed many of the surviving members, report that half pronounced it RAY-she-oh and half RAT-ee-oh.♦) For their first meeting they invited Warren McCulloch.
They talked not just about understanding brains but “designing” them. A psychiatrist, W. Ross Ashby, announced that he was working on the idea that “a brain consisting of randomly connected impressional synapses would assume the required degree of orderliness as a result of experience”♦—in other words, that the mind is a self-organizing dynamical system. Others wanted to talk about pattern recognition, about noise in the nervous system, about robot chess and the possibility of mechanical self-awareness. McCulloch put it this way: “Think of the brain as a telegraphic relay, which, tripped by a signal, emits another signal.” Relays had come a long way since Morse’s time. “Of the molecular events of brains these signals are the atoms. Each goes or does not go.” The fundamental unit is a choice, and it is binary. “It is the least event that can be true or false.”♦
They also managed to attract Alan Turing, who published his own manifesto with a provocative opening statement—“I propose to consider the question, ‘Can machines think?’ ”♦—followed by a sly admission that he would do so without even trying to define the terms machine and think. His idea was to replace the question with a test called the Imitation Game, destined to become famous as the “Turing Test.” In its initial form the Imitation Game involves three people: a man, a woman, and an interrogator. The interrogator sits in a room apart and poses questions (ideally, Turing suggests, by way of a “teleprinter communicating between the two rooms”). The interrogator aims to determine which is the man and which is the woman. One of the two—say, the man—aims to trick the interrogator, while the other aims to help reveal the truth. “The best strategy for her is probably to give truthful answers,” Turing suggests. “She can add such things as ‘I am the woman, don’t listen to him!’ but it will avail nothing as the man can make similar remarks.”

But what if the question is not which gender but which genus: human or machine?
It is understood that the essence of being human lies in one’s “intellectual capacities”; hence this game of disembodied messages transmitted blindly between rooms. “We do not wish to penalise the machine for its inability to shine in beauty competitions,” says Turing dryly, “nor to penalise a man for losing in a race against an aeroplane.” Nor, for that matter, for slowness in arithmetic. Turing offers up some imagined questions and answers:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Before proceeding further, however, he finds it necessary to explain just what sort of machine he has in mind. “The present interest in ‘thinking machines,’ ” he notes, “has been aroused by a particular kind of machine, usually called an ‘electronic computer’ or ‘digital computer.’ ”♦ These devices do the work of human computers, faster and more reliably. Turing spells out, as Shannon had not, the nature and properties of the digital computer. John von Neumann had done this, too, in constructing a successor machine to ENIAC. The digital computer comprises three parts: a “store of information,” corresponding to the human computer’s memory or paper; an “executive unit,” which carries out individual operations; and a “control,” which manages a list of instructions, making sure they are carried out in the right order. These instructions are encoded as numbers. They are sometimes called a “programme,” Turing explains, and constructing such a list may be called “programming.”
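The three-part anatomy maps directly onto a toy stored-program machine. A sketch for illustration only, with an invented instruction set rather than anything from Turing's paper, and with readable tuples standing in for his numerically encoded instructions:

```python
# A toy rendering of Turing's three parts: a "store" holding the
# programme, an "executive unit" performing each operation, and a
# "control" stepping through the instructions in order.

store = {0: ("LOAD", 7),
         1: ("ADD", 5),
         2: ("PRINT", None),
         3: ("HALT", None)}

def run(store):
    accumulator = 0
    pc = 0                       # the control: which instruction is next
    while True:
        op, arg = store[pc]      # control fetches from the store
        pc += 1
        if op == "LOAD":         # executive unit carries out the work
            accumulator = arg
        elif op == "ADD":
            accumulator += arg
        elif op == "PRINT":
            print(accumulator)   # -> 12
        elif op == "HALT":
            return

run(store)
```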
The idea is an old one, Turing says, and he cites Charles Babbage, whom he identifies as Lucasian Professor of Mathematics at Cambridge from 1828 to 1839—once so famous, now almost forgotten. Turing explains that Babbage “had all the essential ideas” and “planned such a machine, called the Analytical Engine, but it was never completed.” It would have used wheels and cards—nothing to do with electricity. The existence (or nonexistence, but at least near existence) of Babbage’s engine allows Turing to rebut a superstition he senses forming in the zeitgeist of 1950. People seem to feel that the magic of digital computers is essentially electrical; meanwhile, the nervous system is also electrical. But Turing is at pains to think of computation in a universal way, which means in an abstract way. He knows it is not about electricity at all:
Since Babbage’s machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance.… The feature of using electricity is thus seen to be only a very superficial similarity.♦
Turing’s famous computer was a machine made of logic: imaginary tape, arbitrary symbols. It had all the time in the world and unbounded memory, and it could do anything expressible in steps and operations. It could even judge the validity of a proof in the system of Principia Mathematica. “In the case that the formula is neither provable nor disprovable such a machine certainly does not behave in a very satisfactory manner, for it continues to work indefinitely without producing any result at all, but this cannot be regarded as very different from the reaction of the mathematicians.”♦ So Turing supposed it could play the Imitation Game.
He could not pretend to prove that, of course. He was mainly trying to change the terms of a debate he considered largely fatuous. He offered a few predictions for the half century to come: that computers would have a storage capacity of 10⁹ bits (he imagined a few very large computers; he did not foresee our future of ubiquitous tiny computing devices with storage many orders of magnitude greater than that); and that they might be programmed to play the Imitation Game well enough to fool some interrogators for at least a few minutes (true, as far as it goes).
The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.♦
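For scale (a modern conversion, not in the original), the predicted store is modest by later standards:

```python
# Turing's predicted store in modern units -- an illustrative aside.
bits = 10**9
megabytes = bits / 8 / 1_000_000
print(megabytes)  # -> 125.0, i.e. about an eighth of a gigabyte
```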
He did not live to see how apt his prophecy was. In 1952 he was arrested for the crime of homosexuality, tried, convicted, stripped of his security clearance, and subjected by the British authorities to a humiliating, emasculating program of estrogen injections. In 1954 he took his own life.
Until years later, few knew of Turing’s crucial secret work for his country on the Enigma project at Bletchley Park. His ideas of thinking machines did attract attention, on both sides of the Atlantic. Some of the people who found the notion absurd or even frightening appealed to Shannon for his opinion; he stood squarely with Turing. “The idea of a machine thinking is by no means repugnant to all of us,” Shannon told one engineer. “In fact, I find the converse idea, that the human brain may itself be a machine which could be duplicated functionally with inanimate objects, quite attractive.” More useful, anyway, than “hypothecating intangible and unreachable ‘vital forces,’ ‘souls’ and the like.”♦
Computer scientists wanted to know what their machines could do. Psychologists wanted to know whether brains are computers—or perhaps whether brains are merely computers. At midcentury computer scientists were new; but so, in their way, were psychologists.
Psychology at midcentury had grown moribund. Of all the sciences, it always had the most difficulty in saying what exactly it studied. Originally its object was the soul, as opposed to the body (somatology) and the blood (hematology). “Psychologie is a doctrine which searches out man’s Soul, and the effects of it; this is the part without which a man cannot consist,”♦ wrote James de Back in the seventeenth century. Almost by definition, though, the soul was ineffable—hardly a thing to be known. Complicating matters further was the entanglement (in psychology as in no other field) of the observer with the observed. In 1854, when it was still more likely to be called “mental philosophy,” David Brewster lamented that no other department of knowledge had made so little progress as “the science of mind, if it can be called a science.”♦
Viewed as material by one inquirer, as spiritual by another, and by others as mysteriously compounded of both, the human mind escapes from the cognisance of sense and reason, and lies, a waste field with a northern exposure, upon which every passing speculator casts his mental tares.
The passing speculators were still looking mainly inward, and the limits of introspection were apparent. Looking for rigor, verifiability, and perhaps even mathematicization, students of the mind veered in radically different directions by the turn of the twentieth century. Sigmund Freud’s path was only one. In the United States, William James constructed a discipline of psychology almost single-handed—professor of the first university courses, author of the first comprehensive textbook—and when he was done, he threw up his hands. His own Principles of Psychology, he wrote, was “a loathsome, distended, tumefied, bloated, dropsical mass, testifying to but two facts: 1st, that there is no such thing as a science of psychology, and 2nd, that WJ is an incapable.”♦
In Russia, a new strain of psychology began with a physiologist, Ivan Petrovich Pavlov, known for his Nobel Prize–winning study of digestion, who scorned the word psychology and all its associated terminology. James, in his better moods, considered psychology the science of mental life, but for Pavlov there was no mind, only behavior. Mental states, thoughts, emotions, goals, and purpose—all these were intangible, subjective, and out of reach. They bore the taint of religion and superstition. What James had identified as central topics—“the stream of thought,” “the consciousness of self,” the perception of time and space, imagination, reasoning, and will—had no place in Pavlov’s laboratory. All a scientist could observe was behavior, and this, at least, could be recorded and measured. The behaviorists, particularly John B. Watson in the United States and then, most famously, B. F. Skinner, made a science based on stimulus and response: food pellets, bells, electric shocks; salivation, lever pressing, maze running. Watson said that the whole purpose of psychology was to predict what responses would follow a given stimulus and what stimuli could produce a given behavior. Between stimulus and response lay a black box, known to be composed of sense organs, neural pathways, and motor functions, but fundamentally off limits. In effect, the behaviorists were saying yet again that the soul is ineffable. For a half century, their research program thrived because it produced results about conditioning reflexes and controlling behavior.
As the psychologist George Miller put it afterward, the behaviorists said: “You talk about memory; you talk about anticipation; you talk about your feelings; you talk about all these mentalistic things. That’s moonshine. Show me one, point to one.”♦ They could teach pigeons to play ping-pong and rats to run mazes. But by midcentury, frustration had set in. The behaviorists’ purity had become a dogma; their refusal to consider mental states became a cage, and psychologists still wanted to understand what the mind was.
Information theory gave them a way in. Scientists analyzed the processing of information and built machines to do it. The machines had memory. They simulated learning and goal seeking. A behaviorist running a rat through a maze would discuss the association between stimulus and response but would refuse to speculate in any way about the mind of the rat; now engineers were building mental models of rats out of a few electrical relays. They were not just prying open the black box; they were making their own. Signals were being transmitted, encoded, stored, and retrieved. Internal models of the external world were created and updated. Psychologists took note. From information theory and cybernetics, they received a set of useful metaphors and even a productive conceptual framework. Shannon’s rat could be seen not only as a very crude model of the brain but also as a theory of behavior. Suddenly psychologists were free to talk about plans, algorithms, syntactic rules. They could investigate not just how living creatures react to the outside world but how they represent it to themselves.