Can a machine feel anything? Could human subjectivity be simulated in cogs and wheels? Research on the role emotion plays in cognition has entered computational research. Mostly these researchers are tweaking the computational model of mind, not overturning it. They are working apace to create better simulations of minds with emotion or affect as part of them. The authors of a book titled Emotional Cognitive Neural Algorithms with Engineering Applications: Dynamic Logic; From Vague to Crisp (2011), who have obviously not given up on computational methods and algorithms, tell the story of AI as a blinkered one:
For a long time people believed that intelligence is equivalent to conceptual understanding and reasoning. A part of this belief was that the mind works according to logic. Although it is obvious that the mind is not logical, over the course of the two millennia since Aristotle, and two hundred years since Newton, many people have identified the power of intelligence with logic. Founders of artificial intelligence in the 1950s and 60s . . . believed that by relying on rules of logic they would soon develop computers with intelligence far exceeding the human mind.246 (my italics)
This, they acknowledge, did not happen. Therefore these authors are describing alternative computational methods and a dynamic form of logic in the hope that machine intelligence will begin to mimic human thinking more closely.
After all, even Descartes worked hard in The Passions of the Soul to show how mind and body interact and how our emotions are useful to us in living a good life. The very idea of CTM, however, isolates cognition and information processing from bodily movement and the senses. A machine mind can always be given a body, but the idea that a particular body has no effect on the mind’s essential algorithms perpetuates the mind-body divide. There are AI scientists who have abandoned the computational model altogether and turned to bodies for answers.
In his wonderfully titled book Passionate Engines: What Emotions Reveal About the Mind and Artificial Intelligence (2001), Craig DeLancey, in tune with a growing number of others, bemoans the failures of AI and notes that the best work in the field cannot even begin to “aspire to imitate some few features of an ant’s capabilities and accomplishments.” He goes on to argue that beginning “with pure symbol or proposition manipulation” will not result in “autonomous behavior” and declares the approach “a failure.” Not only that, he argues, “it reveals very deep prejudices about the mind . . . that are conceptually confused, unrealistic, and conflict with our best scientific understanding.”247 The man hopes to bring what he calls “deep affect” to AI by other means.
Rodney Brooks, director of MIT’s Computer Science and Artificial Intelligence Laboratory, rejected GOFAI in the 1980s by insisting that intelligence requires a body. In his 1991 essay, “Intelligence Without Reason,” Brooks emphasizes “situatedness” and “embodiment” rather than symbolic representations, a strategy that closely echoes Merleau-Ponty’s phenomenology.248 Brooks is emphatic about where GOFAI went wrong: “Real biological systems are not rational agents that take inputs, compute logically, and produce outputs.”249 Using an embodied model, the scientist has created “mobots” or “creatures” in the MIT artificial intelligence lab. Interestingly, these artificial beings have no “I” or “self” model, no central guiding intelligence. They navigate the environment around them by responding “intelligently” through sensors.
These creatures resemble insects more than human beings. He describes them as “a collection of competing behaviors without a central control.” Indeed, they make me think of Diderot’s swarm of bees and the philosopher’s meditation in D’Alembert’s Dream on whether the swarm is a single being or a mass of separate beings acting in concert. Brooks has created a collection of capabilities in motion. He has evidently read Dreyfus, since he mentions the philosopher, who was heavily influenced by Heidegger, in one of his books, but Brooks is somewhat allergic to philosophy in general. He contends that he is not interested in “the philosophical implications” of his creatures and, despite the resemblance his thought may have to Heidegger’s, he claims his work “was not so inspired” and is based “purely on engineering considerations.”250 Without any direct, or at least any acknowledged, relation to philosophical ideas, Brooks has made significant progress in AI by thinking about artificial bodies and their role in “intelligence.”
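Brooks’s actual robots were built from layered finite-state machines running on real hardware, but the flavor of control he describes, competing behaviors arbitrated by priority rather than by a central planner, can be suggested in a minimal Python sketch. The behavior names and sensor fields below are my own illustrative inventions, not Brooks’s code:

```python
# A minimal sketch of behavior-based control in the spirit Brooks describes:
# no world model, no "self," just competing behaviors ranked by priority.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Sensors:
    obstacle_ahead: bool
    light_level: float  # 0.0 (dark) to 1.0 (bright)

def avoid(s: Sensors):
    # Highest-priority behavior: veer away from anything in the way.
    return "turn_left" if s.obstacle_ahead else None

def seek_light(s: Sensors):
    # Middle-priority behavior: head toward brighter areas.
    return "forward" if s.light_level > 0.5 else None

def wander(s: Sensors):
    # Default behavior: keep moving no matter what.
    return "forward"

BEHAVIORS = [avoid, seek_light, wander]  # ordered by priority

def act(s: Sensors) -> str:
    # The first behavior that fires wins; no central planner decides.
    for behavior in BEHAVIORS:
        command = behavior(s)
        if command is not None:
            return command

print(act(Sensors(obstacle_ahead=True, light_level=0.8)))  # -> "turn_left"
```

Nothing in the sketch represents the world or the robot itself; whatever “intelligence” emerges comes from which behavior wins at a given moment, which is exactly the absence of a central guiding “I” described above.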
In Flesh and Machines: How Robots Will Change Us (2002), he restates Freud’s famous comments about Copernicus and Darwin as assaults on human “self-love.” In Brooks’s rephrasing, self-love becomes human “specialness.” He suggests that the third assault is being delivered not by psychoanalysis (Freud goes unmentioned as the originator of the comment) but by machines: “We humans are being challenged by machines.”251 He makes this claim despite the fact that he repeatedly contends that human beings are also machines. “Anything that’s living is a machine. I’m a machine; my children are machines. I can step back and see them as being a bag of skin full of biomolecules that are interacting according to some laws.”252 For Brooks, we biological machines are threatened by nonbiological machines.
I vividly remember my lesson in the fifth grade on simple machines: the lever, the wheel and axle, the screw, the pulley, the wedge, and the inclined plane. Each one was pictured on the filmstrip the class watched. A simple machine was a device that could alter the magnitude or direction of a force. From these machines one could build complex ones. They were machine building blocks. Can the nervous system as a whole be characterized as a machine? Is the placenta a temporary machine? What about the endocrine system? Harvey’s use of the hydraulic system to characterize the working of the heart was unusually effective. The machine allowed him to understand the organ. But is this true of all anatomical functions? Isn’t Damasio right that there is a difference between the living cell and the machinery involved in building a plane or a car?
There is a continual elision at work—not only in Brooks’s writing, but in many of the texts I have read on AI—between one kind of machine and another and between the living and the simulated. The question is: What is alive, and how do we know when something is alive? I liked to pretend my dolls were alive, half wished they would come alive, but I knew they wouldn’t. I know my computer isn’t alive, even though it is a marvelous machine. I don’t worry that it hasn’t the right to vote in elections, despite its “intelligence” and “memory.” If it gets a virus, I don’t sit beside it worrying about how it feels. Distinguishing the living from the nonliving, however, is a genuine philosophical problem. Saint Augustine famously noted that he knew what time was, but when he was asked to explain it, he found it impossible to put his knowledge into words. I think I recognize what life is, but can I explain what it is? After all, we now have people who are brain-dead, which is not the same as dead-dead. Rodney Brooks is well aware that his “creatures” don’t feel the way human beings do, but then he contends that the difference between “us” and “them” may be insignificant. “Birds can fly. Airplanes can fly. Airplanes do not fly exactly as birds do, but when we look at them from the point of view of fluid mechanics, there are common underlying physical mechanisms that both utilize.”253 Well, yes, but there is still a difference between the nervous system of a bird with its wings open in flight and a jet with a motor and wings that becomes airborne. Isn’t the bird alive and the plane dead, despite the fact that they both move through the air?
Work on simulating emotional responses in robots has been a significant project in the MIT lab. Again, Brooks seems to be uncertain about where to draw the line between internal feeling and the external appearance of feeling. Writing about simulated emotions in robots, he asks, “Are they real emotions or are they only simulated emotions? And even if they are only simulated emotions today, will the robots we build over the next few years come with real emotions?”254 (italics in original). He does not explain how that will happen, but he is inclined to leave the question of simulation of emotion and actual emotion blurry. In his preface, he mentions HAL with admiration and admits that for now the robots of science fiction and the machines in our daily life remain distant from each other. This gap, however, will soon be closed: “My thesis is that in just twenty years the boundary between fantasy and reality will be rent asunder. Just five years from now that boundary will be breached in ways that are as unimaginable to most people today as daily use of the World Wide Web was ten years ago.”255 I am writing this in 2015. In 2007, I did not see the fantasy/reality border collapse in “unimaginable” ways. Prediction, however, has become something of a sport in AI circles.
One of Brooks’s younger colleagues, Cynthia Breazeal, drew inspiration from developmental psychology and infant research to produce Kismet, a big-eyed interactive robot head. In her book Designing Sociable Robots, she cites research on infant development and the infant-mother couple or “dyad” as inspiration for Kismet. She is clearly aware of the research, which has demonstrated that newborns are capable of imitating an adult’s facial expressions. She knows that every child is dependent on interactions with another person to develop normally. Breazeal cites Colwyn Trevarthen, an important infant researcher who coined the term “primary intersubjectivity,” now used to describe an infant’s earliest social interactions with other people.256 She credits this theory as essential to her design of Kismet. And yet, how does one go about giving feelings to computers, metal, wiring, and transistors?
Cynthia Breazeal had her own favorite fictional robots as a girl—not HAL, whom she found “creepy,” but R2-D2 and C-3PO from Star Wars. Breazeal designed Kismet to simulate infantile emotional facial responses to others. The machine does not talk but makes expressive facial movements and sounds that are pitched to mimic emotion in relation to its interlocutor. The robot is an amazing feat of interactive engineering. Kismet, she writes, “connects to people on a physical level, on a social level, and on an emotional level. It is jarring for people to play with Kismet and then see it turned off, suddenly becoming an inanimate object.”257 Along with its camera eyes and sensors, Kismet’s parts have been given the names of physiological systems. The robot has a “synthetic nervous system” and a “motivational system” complete with “drives.” Her use of the word “drive” appears to be more influenced by its use in psychology than in engineering, but Breazeal seems wholly unaware of the history of the word, which comes from the German Trieb, has its roots in philosophy, and played such a prominent role in Freud’s theory. Kismet’s parts are not organic but mechanical, and although the machine has been programmed to simulate “six basic emotions”—anger, disgust, fear, happiness, sadness, and surprise, each of which results in a facial expression—can this moving head be called an emotional machine?258
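What “programmed to simulate six basic emotions” means can be made concrete with a toy sketch. The Python below is my own illustration, not Breazeal’s code, and Kismet’s real system was far more elaborate, but the endpoint is the same in kind: a labeled internal state is translated into a motor configuration for the face. The pose parameters are invented for this sketch:

```python
# A toy illustration of a fixed emotion-to-expression lookup. The six labels
# are the basic emotions named in the text; the pose parameters are invented
# for illustration and are not Kismet's actual control values.

EXPRESSIONS = {
    "anger":     {"brows": "lowered",  "lids": "narrowed", "ears": "back"},
    "disgust":   {"brows": "lowered",  "lids": "half",     "ears": "down"},
    "fear":      {"brows": "raised",   "lids": "wide",     "ears": "back"},
    "happiness": {"brows": "neutral",  "lids": "open",     "ears": "perked"},
    "sadness":   {"brows": "inner_up", "lids": "drooped",  "ears": "down"},
    "surprise":  {"brows": "raised",   "lids": "wide",     "ears": "perked"},
}

def express(emotion_label: str) -> dict:
    # Whatever the label says, the output is a motor pose, not a feeling.
    return EXPRESSIONS[emotion_label]

print(express("fear"))  # {'brows': 'raised', 'lids': 'wide', 'ears': 'back'}
```

However rich the machinery behind the lookup, what it delivers is a pose, which is precisely the distinction at issue here: an expression can be produced without anything being felt.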
The question is not purely one of semantics but of vocabulary, similar to Damasio’s comment about “homeostasis” in relation to organisms and planes, as well as the problem Karl Pribram identified as crucial to the way his colleagues in neuroscience received Freud’s Project. To give a mechanical system a biological name—“synthetic nervous system”—obfuscates the fact that it does not function at all like an organic human nervous system. There is no artificial brain in Kismet with billions of neurons, no limbic system in that brain, no enteric nervous system, no nerve endings, no endocrine system, but the biological name, softened by the qualifier “synthetic,” makes the machine appear to possess a kind of nervous system after all.
If Rodney Brooks seems unsure about the difference between felt emotions and the appearance of emotions, so does Breazeal. The goals she envisions for her creature have a tendency to merge with what she has actually produced:
Humans are the most socially advanced of all species. As one might imagine, an autonomous humanoid robot that could interpret, respond, and deliver human-style social cues even at the level of a human infant is quite a sophisticated machine. Hence, this book explores the simplest kind of human-style social interaction and learning, that which occurs between a human infant with its caregiver. My primary interest in building this kind of sociable, infant-like robot is to explore the challenges of building a socially intelligent machine that can communicate with and learn from people.259
First, characterizing interactions between an infant and “caregiver” as “the simplest kind of human-style social interaction and learning” glosses over what is actually involved in the exchanges between them. Researchers continue to analyze the enormously complex relations that take place within the parent-infant dyad, and the intricate physiology of these interactions is by no means simple, nor is it fully understood. The parent-infant dyad consists of two sentient beings who are engaged with each other. While an infant may not be reflectively self-conscious, she is certainly prereflectively conscious. She possesses precisely what Kismet does not—experienced feelings.
“Synchrony” is a word used to identify the dynamic and reciprocal physiological and behavioral adaptations that take place between a parent and baby over time. Scientists research gaze, vocal, and affective or emotional synchronies. For example, Ruth Feldman and her colleagues studied infants and mothers who engaged in face-to-face interactions but did not touch each other. During these “episodes of interaction synchrony,” mothers and infants had coordinated heart rhythms: “Results of the present study demonstrate that human mothers and infants engage in a process of bio-behavioral synchrony as it was initially defined in other mammals—the regulation of infant physiology by means of social contact. During face-to-face interactions mothers adapt their heart rhythms to those of their infant’s [sic] and infants, in turn, adapt their rhythms to those of the mother’s within lags of less than 1 s [one second], forming biological synchrony in the acceleration and deceleration of heart rate.”260 It has become clear that such synchronies are vital to an infant’s development, including brain development. Obviously, neither mother nor baby is aware of coordinating their heartbeats, but how does one go about imitating such subtle interactions if one of the partners has no heart—in fact, no biological systems whatsoever?
Second, one has to read Breazeal’s passage carefully to glean that she is not claiming that Kismet learns anything; the robot is therefore not socially intelligent in the sense that its synthetic nervous system develops over time through its encounters with others, as a human infant’s does. It is “intelligently” responsive to various visual and auditory cues, but it is reliant on engineered programming for its “development.” In her book, Breazeal explains, “In the near future, these interaction dynamics could play an important role in socially situated learning for Kismet.”261 The near future is not now. The fact that a sophisticated scholar such as Elizabeth A. Wilson misses this point in her book Affect and Artificial Intelligence suggests to me that Breazeal’s wishes for future research are easy to conflate with her accomplishments in the present. Her prose is mushy on the question. Wilson fails to see that Breazeal’s hopes for future robots have merged with the one she has already designed when she writes, “Kismet’s designers used these expressions and interactions they provoke with a human ‘caretaker’ as scaffolding for learning. Kismet’s intelligence was not programmed in, it grew out of Kismet’s embodied, affectively oriented interactions with others.”262 Although I sympathize with Wilson’s confusion, her statement is simply not true. In an interview with the New York Times, a journalist pointedly asked Breazeal if Kismet learned from people. “From an engineering standpoint, Kismet got more sophisticated. As we continued to add more abilities to the robot, it could interact with people in richer ways . . . But I think we learned mostly about people from Kismet”263 (my italics). What, in fact, did they learn?
People treated Kismet as an interactive animated thing, which is exactly what it is, but does this tell us anything new about people? Human beings engage emotionally with fictive beings of all kinds, not just toys or robots but characters in novels, figures on canvases, the imaginary people actors embody in films and onstage, and avatars in many forms of interactive and virtual games. Are the feelings people have for a responsive mechanical head such as Kismet, whose authors are Breazeal and her team, qualitatively different from the ones they have for, say, Jane Eyre or Elizabeth Bennet or Raskolnikov? This is not a rhetorical question. Arguably, in terms of complex emotional experiences, a good novel outstrips the rewards offered by Kismet. But then, no character on the page will nod or babble back if you talk to her or him.
There is a peculiar form of myopia displayed by many people locked inside their own fields. Their vision has become so narrow they can no longer make the most obvious distinctions or connections. Human beings are responding to fictive beings all the time. Of course people will respond to a cute mechanical head that imitates human facial expressions! Human responsiveness to Kismet does not lend it actual feelings, allow it to learn, or bring the machine any closer to that desired state. The question of what is real and what is simulated or virtual, however, is an ongoing one, treated alternately with paranoia and celebration but only rarely thought through with, well, any “intelligence.”
Brooks and Breazeal have employed embodied models for their robots, and it would be preposterous to say that they have not enjoyed success. Their creatures are marvels. They are impressive automatons. Do their robots feel anything more than my talking doll or the eighteenth-century defecating duck? Can human sentience be simulated? The gap between science fiction and reality has not been closed. What we can be sure of is that dreams of HAL and C-3PO have infected the mobots and the baby-like Kismet. Are feeling, sentient robots with “real emotions” just around the corner? Can silicon and aluminum and electrical wiring and cameras and software, when cleverly designed, mimic the motor-sensory-affective states that appear to be necessary for learning and dynamic development in organic creatures, even very simple ones?