The Delusions of Certainty
In an essay first published in the Frankfurter Allgemeine Zeitung, David Gelernter, a professor of computer science at Yale and chief scientist at Mirror Worlds Technologies, who has identified himself as an “anti-cognitivist,” argues, “No computer will be creative unless it can simulate all the nuances of human emotion.” In the course of a few paragraphs, Gelernter refers to Rimbaud, Coleridge, Rilke, Blake, Kafka, Dante, Büchner, Shelley (Percy), and T. S. Eliot. In opposition to the computational model, he maintains, “The thinker and his thought stream are not separate.”264 This idea is very close to William James, who was the first to use the expression “stream of consciousness” and identify it with a self. Gelernter resembles a Pragmatist, not a Cartesian. He does not believe human thought can be equated with symbol manipulation. You need the whole person and her feelings. He is interested in creativity, in the continuums of human consciousness, and he is fairly optimistic that with time artificial intelligence may solve many problems involved in the imitation of human processes, including emotion. He does not believe, however, that an “intelligent” computer will ever experience anything. It will never be conscious. “It will say,” he writes, “ ‘that makes me happy,’ but it won’t feel happy. Still: it will act as if it did.”265 The distinction, it seems to me, is vital.
I have no doubt that these artificial systems will become increasingly animate and complex and that researchers will draw on multiple theoretical models of human and animal development to achieve their goals, but it is obvious to me that the disagreements about what can be done in artificial intelligence rest not only on different philosophical paradigms but also on the imaginative flights one scientist takes as opposed to another, on what each one hopes to achieve and believes he or she can achieve, even when the road ahead is enveloped in heavy fog. Brooks, for example, does not tell us how he will create “real emotions” as opposed to “simulated” ones, but he tells us confidently that it is part of the future plan. Gelernter does not believe software will ever produce subjectivity or consciousness, but he believes simulations will continue apace. They are in fundamental disagreement, a disagreement that is shaped by their learning, their interests, and their fantasies about the future.
I must emphasize that my discussion here does not deny the aid offered in many fields by increasingly complex computers that organize data in ways never dreamed of before. There are machines that produce answers to calculations much faster than any human being. There are robots programmed to mimic human expressions and interact with us, machines that beat the best of us at chess, and machines that can now scramble over rocks and onto Martian terrain. Nor am I arguing that mathematical or computational models should be banned from thinking about biology. Computers are now used to create beautiful, colorful simulations of biological processes, and these simulations can provide insights into how complex systems of many kinds work. Computational forms have become strikingly diverse and their applications are myriad. These stunning technological achievements should not prohibit us from making distinctions, however. And they should not lead us to confuse a model of reality with reality itself. The fact that planes and birds fly does not destroy the differences between them, and those differences remain even when we recognize that fluid mechanics is applicable in both cases.
Anyone who has pressed “translate” while doing research on the Internet knows that computer translations from one language to another are egregious. The garbled sentences that appear in lieu of a “translation” deserve consideration. Here is a sample from a Google translation from French to English taken from a short biography of the philosopher Simone Weil: “The strength and clarity of his thought mania paradox grows logical rigor and meditation in then ultimate consequences.” To argue that the nuances of semantic meanings have escaped this program’s “mind” is an understatement. Language may use rules, but it also involves countless ineffable factors that scientists have been unable to fix in a computational model. Indeed, if language were a logic-based system of signs with a universal grammar that could be understood mathematically, then we should have beautiful computer translations, shouldn’t we?
But computers do not feel meanings the way a human translator or interpreter does. From my perspective, this failure is revelatory. Language is not a disembodied code that machines can process easily. Words are at once outside us in the world and inside us, and their meanings shift over time and place. A word means one thing in one context and another thing elsewhere. A wonderful example of a contextual error involves a story I was told about the French translation of one of Philip Roth’s novels. A baseball game is described in the book. A player “runs home.” In English, this means that the runner touches home plate, a designated corner of the diamond, with his foot. In the French, however, the player took off for his own house.
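The failure is easy to reproduce in miniature. What follows is a deliberately naive sketch in Python (the one-word glosses are invented for illustration and stand in for no real dictionary or translation system) of the kind of context-blind, word-by-word substitution that produces such errors:

```python
# A toy, context-blind translator: each word is looked up in isolation.
# The one-word French glosses below are invented for illustration and
# stand in for no real dictionary or translation system.
WORD_GLOSSES = {
    "the": "le",
    "player": "joueur",
    "runs": "court",    # always the literal verb, never the baseball idiom
    "home": "maison",   # always the noun "house"
}

def word_by_word(sentence):
    """Translate word by word, with no access to context."""
    return " ".join(WORD_GLOSSES.get(word, word)
                    for word in sentence.lower().split())

print(word_by_word("The player runs home"))
# -> "le joueur court maison": the player has, in effect, run to his
#    own house, precisely the contextual error in the Roth translation.
```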
Newspaper articles can now be generated by computers—reports on sports events, fires, and weather disasters. The ones I have read are formulaic and perfectly legible, and they surely mark an advance in basic computer composition. They are stunningly dull, however. After reading three or four of these brief reports, I felt as if I had taken a sedative. Whether they could be well translated into other languages by computers is another question. Words accrue and lose meaning through a semantic mobility dependent on the community in which they thrive, and these meanings cannot be divorced from bodily sensation and emotion. Slang emerges among a circle of speakers. Irony requires double consciousness, reading one meaning and understanding another. Elegant prose involves a feeling for the rhythms and the music of sentences, a product of the sensual pleasure a writer takes in the sounds of words and the varying metric beats of sentences. Creative translation must take all this into account. If a meaning is lost in one sentence, it might be gained or added to the next one. Such considerations are not strictly logical. They do not involve a step-by-step plan but come from the translator’s felt understanding of the two languages involved.
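The dullness is structural. Reports of this kind are, at bottom, filled-in templates, as a schematic sketch suggests (the template, the teams, and the numbers here are all invented):

```python
# A minimal sketch of template-based report generation, in the spirit
# of the formulaic computer-written news items described above.
# The template and the game data are invented for illustration.
TEMPLATE = (
    "{winner} defeated {loser} {ws}-{ls} on {day}. "
    "{star} led {winner} with {goals} goals."
)

game = {
    "winner": "Riverside", "loser": "Hillcrest",
    "ws": 3, "ls": 1, "day": "Saturday",
    "star": "J. Alvarez", "goals": 2,
}

print(TEMPLATE.format(**game))
# Every report shares the same skeleton; only the slotted values vary,
# which is why the prose is legible, correct, and stunningly dull.
```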
Rodney Brooks is right that the distance between real machines and the fictional HAL had not been bridged in 2002, and it has not been bridged since. In AI the imaginative background has frequently become foreground. Fantasies and fictions infect beliefs and ideas. Human beings are remembering and imaginative creatures. Wishes and dreams belong to science, just as they belong to art and poetry, and this is decidedly not a bad thing.
An Aside on Niels Bohr and Søren Kierkegaard
I offer a well-known comment by the physicist Niels Bohr, who made important discoveries about the character of the atom and early contributions to quantum theory: “When it comes to atoms,” Bohr wrote, “language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images and establishing mental connections.”266 This remark links the poet and the physicist as imaginative beings. Bohr’s education helps explain why he links his own work to the poet’s. The physicist loved poetry, especially the poems of Goethe, and he strongly identified himself with the great German artist and intellectual. Bohr continually quoted literary artists he admired. He read Dickens passionately and liked to conjure vivid pictures for entities in physics—electrons as billiard balls or atoms as plum puddings with jumping raisins—a proclivity that no doubt lies behind his idea that images serve physics as well as poetry.267 I find that images are extremely helpful for understanding ideas, and for many people a plum pudding with animated raisins is more vivid than images of tapes of code or hardware and software. It is not surprising either that Bohr felt a kinship with his fellow Dane Søren Kierkegaard, a philosopher who was highly critical of every totalizing intellectual system and of science itself when it purported to explain everything. For Kierkegaard, objectivity as an end in itself was wrongheaded, because it left out the single individual and subjective experience.
In Concluding Unscientific Postscript, Kierkegaard’s pseudonym, Climacus, strikes out at the hubris of science. In his “introduction,” which follows his “preface,” Climacus lets an ironic arrow fly: “Honor be to learning and knowledge; praised be the one who masters the material with the certainty of knowledge, with the reliability of autopsy.”268 Although Concluding Unscientific Postscript is a work of critical philosophy, it also incorporates high parody of flocks of assistant professors who churn out one deadly paragraph and parenthesis after another, written for “paragraph gobblers” who are held under the “tyranny of sullenness and obtuseness and rigidity.”269 Kierkegaard’s pseudonym is right that the so-called intellectual life in every field often kills the objects it studies. Nature, human history, and ideas themselves are turned into corpses for dissection. Mastery of material, after all, implies subjugation, not interaction, and a static object, not a bouncing one. After reading Stages on Life’s Way, Bohr wrote in a letter, “He [Kierkegaard] made a powerful impression on me when I wrote my dissertation at a parsonage on Funen, and I read his works day and night . . . His honesty and willingness to think the problems through to their very limit is what is great. And his language is wonderful, often sublime.”270
Kierkegaard did take questions to their limit, to the very precipice of comprehension, and, at the edge of that cliff, he understood a jump was needed, a jump into faith. The difference between this philosopher and many other philosophers is that he knew a leap had to be made. He did not disguise the leap with arguments that served as systematic bridges. Kierkegaard was a Christian but one of a very particular kind. Like Descartes, he was impatient with received ideas. Unlike Descartes, he did not think one could reason one’s way into ultimate truths. Bohr’s close friend, the professor of philosophy Harald Høffding, wrote a précis on Kierkegaard and argued that no theory is complete, that contradictions are inevitable. “Neither [a secure fact nor a complete theory] is given in experience,” Høffding wrote, “nor can either be adequately supplied by our reason; so that, above and below, thought fails to continue, and terminates against an ‘irrational.’ ”271 Arguably, without what J. L. Heilbron calls Bohr’s “high tolerance for ambiguity,” he might not have made the leap in thought he had to make, a creative, imaginative leap that could encompass a paradoxical truth.272 Quantum theory would, after all, turn Newton’s clockwork into a weird, viscous, unpredictable dual state of waves and particles that was dependent on an observer. I do not pretend to understand how physicists arrived at quantum theory. As one kind young physicist said to me after I had exhausted him with questions, “Siri, it’s true that the metaphysics of physics is easier than the physics.” This statement is debatable, but its wit charms me nevertheless.
I am also well aware that there are any number of contemporary scientists who look askance at the comments made by Bohr and other physicists of the time in a metaphysical vein. In Physics and Philosophy, Bohr’s friend Werner Heisenberg wrote, “Both science and art form in the course of the centuries a human language by which we can speak about the more remote parts of reality, and the coherent sets of concepts as well as the different styles of art are different words or groups of words in this language.”273 Thoughts, whether in philosophy or in science or in art, cannot be separated from the thinker, but they cannot be separated from the community of thinkers and speakers either. What articulated thoughts could we possibly have if we weren’t among other thinkers and speakers?
Wet or Dry?
The artificial intelligence project, as Turing saw it, was about reproduction, about reproducing the human in the machine, making autonomous brains or whole beings in a new way. He understood that interest in food, sex, and sports couldn’t be easily reproduced in such a machine, and he understood that sensation and locomotion played a role that might make the enterprise extremely difficult. Turing pursued embryology for a time, and, in 1952, he proposed a mathematical model for the growing embryo, one that continues to be debated among people in the field.274 Turing was keenly aware of the nature of models, and he knew he had stepped outside his field of expertise by creating one for biology. “This model,” he wrote, “will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge.”275 Heisenberg once said, “We have to remember that what we observe is not nature herself, but nature exposed to our method of questioning.”276 A model, whether mathematical or otherwise, is useful to frame a question about the world, but that does not mean the model is the world.
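For readers curious about the shape of that 1952 model: Turing imagined two reacting and diffusing chemicals, his “morphogens,” and showed that diffusion, ordinarily a smoothing force, could destabilize a uniform state and generate spatial pattern. In conventional modern notation (the symbols are the textbook ones, not necessarily Turing’s own), the system reads:

```latex
% A two-morphogen reaction-diffusion system of the kind Turing proposed,
% written in conventional modern notation.
\begin{align*}
  \frac{\partial u}{\partial t} &= f(u,v) + D_u \nabla^2 u \\
  \frac{\partial v}{\partial t} &= g(u,v) + D_v \nabla^2 v
\end{align*}
% u, v:      concentrations of the two morphogens
% f, g:      their local reaction kinetics
% D_u, D_v:  diffusion coefficients; when these differ enough, a
%            uniform state can become unstable and patterns can emerge
```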
In science there are ways of testing models. Some work and others don’t. Some models, such as string theory in physics, may be proven at a future date but at present remain purely theoretical. Mind as information-processing machine has passionate advocates and opponents. In a textbook titled From Computer to Brain: Foundations of Computational Neuroscience (2002), William Lytton explains the benefits of models. Like McCulloch and Pitts, von Neumann, and Turing, Lytton argues for simplification. Lytton’s book rests on the unquestioned assumption that the mind is a computer, a hypothesis that over the course of half a century has hardened into a truth he knows will not be questioned by his computational peers. Again, isn’t this what Goethe worried about? Notice how different Lytton’s tone is from that of all the earlier scientists mentioned above, who were quick to say they knew actual brain processes were far more complex than their models. Notice, too, the cutting, paring metaphor Lytton uses to describe the process.
We show how concept neurons differ from real neurons. Although a recitation of these differences makes it look like these are lousy models, they are not. The concept neurons are attempts to get to the essence of neural processing by ignoring irrelevant detail and focusing only on what is needed to do a computational task. The complexities of the neuron must be aggressively pared in order to cut through the biological subtleties and really understand what is going on.277 (my italics)
Significantly, this book is not called From Brain to Computer. Lytton argues in exactly the opposite direction from Peter beim Graben and James Wright, whose plaintive statement is worth quoting again: “Sadly, we do not know which physical properties are truly essential.” How does one know that the neuronal “complexities,” which are being so “aggressively pared” away, aren’t important? Much remains to be understood about the brain. Why must biological subtleties be “cut through”? Who is the judge of “irrelevant detail” and how does one know that an “essence” is revealed and not a convenient fiction forced into a now classical computational box? An example from genetics may be valuable. Many people have heard of “junk DNA,” DNA that was faithfully copied over generations but had no apparent purpose, hence the derogatory name. Geneticists now know that much of it does in fact have a purpose. It is not “junk” at all.278
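It may help to see how little a “concept neuron” contains. The sketch below, my own illustration in the McCulloch-Pitts tradition and not code from Lytton’s textbook, reduces the cell to a weighted sum and a threshold:

```python
# A McCulloch-Pitts-style "concept neuron": all biological subtlety
# reduced to a weighted sum and a threshold. This is an illustration
# of the abstraction under discussion, not code from Lytton's textbook.
def concept_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs meets the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and a threshold of 2, the unit computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", concept_neuron([a, b], [1, 1], 2))
```

Whether what has been pared away here is “irrelevant detail” or the heart of the matter is, of course, exactly the question.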
Flesh-and-blood human beings are wet, not dry creatures. It seems clear to me that Richard Dawkins’s “throbbing gels and oozes” are meant to stand in for a wet, living embryo or for a whole biological, material body. Dawkins’s phrase eschews wetness and the immense complexity of embryology, which he plainly states he would rather not “contemplate.” Computational theory of mind, with its concept neurons and its notion of the “mental” as an information-processing machine that may be considered independently of brain function, mechanizes and “dries” the mind into comprehensible algorithms. I am using the word “wet” not just to signal the watery human brain and the moistness of corporeal reality but also as a guiding metaphor that may take us further toward the fears and wishes that hide beneath CTM.
We human animals ingest the world in many ways, when we eat and chew and breathe. We take in the world with our eyes, ears, and noses, and we taste it with our tongues and experience its textures on our skin. We urinate and defecate and vomit and cry tears, and we spit and sweat and menstruate, make milk and sperm, leak vaginal fluids and snot. Our skin forms a boundary around each of us, but that too is porous and can be punctured. We kiss and penetrate one another in various erotic ways, and we copulate, and some forms of corporeal entanglements produce children. And when a woman gives birth, she pushes the infant out of her body. The newborn baby arrives in the world covered in blood and fluids. The placenta, the afterbirth, oozes out of the mother’s vagina. And we die. Our organic bodies putrefy and disintegrate, and we disappear from the world. Are these natural processes separable from our thoughts and our words? That is the question. Turing wanted to include development in his idea of making a machine mind. Embryology fascinated him, and he proposed the infant mind as a machine to be trained and shaped, but “wetness” was never part of the equation. Wasn’t Turing right in saying that this model left out sex and food and sport and much that we value in our lives? Isn’t this a problem?
Biological realities of human development play no role in computational theory of mind except as an inferior gelatinous substrate or as annoying complexities to be cut through so that a conceptual essence can be revealed. The leaky, moist, material body is not part of its model of mind. The embryo, the growing or the grown-up body that throbs and oozes, has nothing to do with the intelligence or the mind that advocates of GOFAI hoped to build: a smart, clean man machine. The computer has inputs and outputs, a restricted entryway and exit. The fantasy of the Replicant or Replicator is clearly alive in David Deutsch’s idea of “Analytical Enginekind,” an alternative being that may be made of other stuff but whose supposed inferiority would, in Deutsch’s word, amount to “racism.”
Doesn’t this theory harbor a wish for a beautiful, dry, thinking machine, a new race that will not grow in or be born from the organic maternal body or from organic materials at all: no wimpy dependent genes, no mother’s egg, no father’s sperm, no embryo, no placenta, no uterine environment, no birth by pushing one person out of another, but rather a new kind of person made from cogs and wheels or digital ones and zeros? All matter, all gels and oozes, will be avoided. Jonathan Swift’s satirical poem “The Lady’s Dressing Room” springs to mind with its immortal line, “Oh! Celia, Celia, Celia shits!” The same poem contains this couplet: “Should I the Queen of Love refuse / Because she rose from stinking ooze?”279 The new race will be born straight out of scientists’ computational minds or discovered somewhere else in the universe. The story of wet, organic, mammalian development is suppressed. There is no beginning and no end, no birth and no death because the new race will be immortal.