Not only does Nao have a personality, he can actually have several of them. Because he learns from his interactions with humans and each interaction is unique, eventually different personalities begin to emerge. For example, one personality might be quite independent, not requiring much human guidance. Another personality might be timid and fearful, scared of objects in a room, constantly requiring human intervention.

  The project leader for Nao is Dr. Lola Cañamero, a computer scientist at the University of Hertfordshire. To start this ambitious project, she analyzed the interactions of chimpanzees. Her goal was to reproduce, as closely as she could, the emotional behavior of a one-year-old chimpanzee.

  She sees immediate applications for these emotional robots. Like Dr. Breazeal, she wants to use these robots to relieve the anxiety of young children who are in hospitals. She says, “We want to explore different roles—the robots will help the children to understand their treatment, explain what they have to do. We want to help the children to control their anxiety.”

  Another possibility is that the robots will become companions at nursing homes. Nao could become a valuable addition to the staff of a hospital. At some point, robots like these might become playmates to children and a part of the family.

  “It’s hard to predict the future, but it won’t be too long before the computer in front of you will be a social robot. You’ll be able to talk to it, flirt with it, or even get angry and yell at it—and it will understand you and your emotions,” says Dr. Terrence Sejnowski of the Salk Institute, near San Diego. Recognizing these emotions is the easy part. The hard part is deciding how the robot should respond, given this information. If the owner is angry or displeased, the robot has to be able to factor this into its response.

  EMOTIONS: DETERMINING WHAT IS IMPORTANT

  What’s more, AI researchers have begun to realize that emotions may be a key to consciousness. Neuroscientists like Dr. Antonio Damasio have found that when the link between the prefrontal lobe (which governs rational thought) and the emotional centers (e.g., the limbic system) is damaged, patients cannot make value judgments. They are paralyzed when making the simplest of decisions (what things to buy, when to set an appointment, which color pen to use) because everything has the same value to them. Hence, emotions are not a luxury; they are absolutely essential, and without them a robot will have difficulty determining what is important and what is not. So emotions, instead of being peripheral to the progress of artificial intelligence, are now assuming central importance.

  If a robot encounters a raging fire, it might rescue the computer files first, not the people, since its programming might say that valuable documents cannot be replaced but workers always can be. It is crucial that robots be programmed to distinguish between what is important and what is not, and emotions are shortcuts the brain uses to rapidly determine this. Robots would thus have to be programmed to have a value system—that human life is more important than material objects, that children should be rescued first in an emergency, that objects with a higher price are more valuable than objects with a lower price, etc. Since robots do not come equipped with values, a huge list of value judgments must be uploaded into them.
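
  To make this concrete, here is a minimal sketch of what such an uploaded value system might look like, assuming nothing more than a hand-coded priority table. The categories and numbers below are purely illustrative, not a real design:

```python
# Illustrative only: a hand-coded priority table standing in for the
# "huge list of value judgments" a robot would need to be given.
RESCUE_PRIORITY = {
    "child": 100,          # human life outranks everything else
    "adult": 90,
    "pet": 40,
    "computer_files": 10,  # replaceable property ranks last
    "furniture": 5,
}

def rescue_order(items_in_room):
    """Sort detected items so that the most valuable (according to the
    uploaded value system) are rescued first; unknown items go last."""
    return sorted(items_in_room,
                  key=lambda item: RESCUE_PRIORITY.get(item, 0),
                  reverse=True)

if __name__ == "__main__":
    # A fire breaks out; the robot has detected these items.
    print(rescue_order(["computer_files", "adult", "child", "furniture"]))
    # -> ['child', 'adult', 'computer_files', 'furniture']
```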

  The problem with emotions, however, is that they are sometimes irrational, while robots are mathematically precise. So silicon consciousness may differ from human consciousness in key ways. For example, humans have little control over emotions, since they happen so rapidly and because they originate in the limbic system, not the prefrontal cortex of the brain. Furthermore, our emotions are often biased. Numerous tests have shown that we tend to overestimate the abilities of people who are handsome or pretty. Good-looking people tend to rise higher in society and have better jobs, although they may not be as talented as others. As the expression goes, “Beauty has its privileges.”

  Similarly, silicon consciousness may not take into account subtle cues that humans use when they meet one another, such as body language. When people enter a room, young people usually defer to older ones and low-ranked staff members show extra courtesy to senior officials. We show our deference in the way we move our bodies, our choice of words, and our gestures. Because body language is older than language itself, it is hardwired into the brain in subtle ways. Robots, if they are to interact socially with people, will have to learn these unconscious cues.

  Our consciousness is influenced by peculiarities in our evolutionary past, which robots will not have, so silicon consciousness may not have the same gaps or quirks as ours.

  A MENU OF EMOTIONS

  Since emotions have to be programmed into robots from the outside, manufacturers may offer a menu of emotions carefully chosen on the basis of whether they are necessary, useful, or likely to increase bonding with the owner.

  In all likelihood, robots will be programmed to have only a few human emotions, depending on the situation. Perhaps the emotion most valued by the robot’s owner will be loyalty. One wants a robot that faithfully carries out its owner’s commands without complaint, that understands the needs of the master and anticipates them. The last thing an owner will want is a robot with an attitude, one that talks back, criticizes people, and whines. Helpful criticisms are important, but they must be made in a constructive, tactful way. Also, if humans give it conflicting commands, the robot should know to ignore all of them except those coming from its owner.
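
  That last rule could amount to something as simple as filtering the command queue by who issued each command. A toy illustration, in which the owner’s name, the command format, and the conflict test are all invented for the example:

```python
# Toy illustration: when commands conflict, obey only the owner's.
# The command format and the conflict test are invented placeholders.
OWNER = "alice"

def resolve(commands):
    """commands: list of (issuer, instruction) pairs.
    If different people issue conflicting instructions,
    keep only those that came from the owner."""
    instructions = {instr for _, instr in commands}
    if len(instructions) > 1:                      # a conflict exists
        commands = [c for c in commands if c[0] == OWNER]
    return [instr for _, instr in commands]

print(resolve([("alice", "clean the kitchen"),
               ("bob",   "ignore the kitchen")]))
# -> ['clean the kitchen']
```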

  Empathy will be another emotion that will be valued by the owner. Robots that have empathy will understand the problems of others and will come to their aid. By interpreting facial movements and listening to tone of voice, robots will be able to identify when a person is in distress and will provide assistance when possible.

  Strangely, fear is another emotion that is desirable. Evolution gave us the feeling of fear for a reason, to avoid certain things that are dangerous to us. Even though robots will be made of steel, they should fear certain things that can damage them, like falling off tall buildings or entering a raging fire. A totally fearless robot is a useless one if it destroys itself.

  But certain emotions may have to be deleted, forbidden, or highly regulated, such as anger. Given that robots could be built to have great physical strength, an angry robot could create tremendous problems in the home and workplace. Anger could get in the way of its duties and cause great damage to property. (The original evolutionary purpose of anger was to show our dissatisfaction. This can be done in a rational, dispassionate way, without getting angry.)

  Another emotion that should be deleted is the desire to be in command. A bossy robot will only make trouble and might challenge the judgment and wishes of the owner. (This point will also be important later, when we discuss whether robots will one day take over from humans.) Hence the robot will have to defer to the wishes of the owner, even if this may not be the best path.

  But perhaps the most difficult emotion to convey is humor, which is a glue that can bond total strangers together. A simple joke can defuse a tense situation or inflame it. The basic mechanics of humor are simple: they involve a punch line that is unanticipated. But the subtleties of humor can be enormous. In fact, we often size up other people on the basis of how they react to certain jokes. If humans use humor as a gauge to measure other humans, then one can appreciate the difficulty of creating a robot that can tell if a joke is funny or not. President Ronald Reagan, for example, was famous for defusing the most difficult questions with a quip. In fact, he accumulated a large card catalog of jokes, barbs, and wisecracks, because he understood the power of humor. (Some pundits concluded that he won the presidential debate against Walter Mondale when he was asked if he was too old to be president. Reagan replied that he would not hold the youth of his opponent against him.) Also, laughing inappropriately could have disastrous consequences (and is, in fact, sometimes a sign of mental illness). The robot has to know the difference between laughing with or at someone. (Actors are well aware of the diverse nature of laughter. They are skilled enough to create laughter that can represent horror, cynicism, joy, anger, sadness, etc.) So, at least until the theory of artificial intelligence becomes more developed, robots should stay away from humor and laughter.

  PROGRAMMING EMOTIONS

  In this discussion we have so far avoided the difficult question of precisely how these emotions would be programmed into a computer. Because of their complexity, emotions will probably have to be programmed in stages.

  First, the easiest part is identifying an emotion by analyzing the movements of a person’s face, lips, and eyebrows, along with the tone of voice. Today’s facial recognition technology is already capable of creating a dictionary of emotions, so that certain facial expressions mean certain things. This process actually goes back to Charles Darwin, who spent a considerable amount of time cataloging emotions common to animals and humans.

  Second, the robot must respond rapidly to this emotion. This is also easy. If someone is laughing, the robot will grin. If someone is angry, the robot will get out of the person’s way and avoid conflict. The robot would have a large encyclopedia of emotions programmed into it, and hence would know how to make a rapid response to each one.

  The third stage is perhaps the most complex because it involves trying to determine the underlying motivation behind the original emotion. This is difficult, since a variety of situations can trigger a single emotion. Laughter may mean that someone is happy, heard a joke, or watched someone fall. Or it might mean that a person is nervous, anxious, or insulting someone. Likewise, if someone is screaming, there may be an emergency, or perhaps someone is just reacting with joy and surprise. Determining the reason behind an emotion is a skill that even humans have difficulty with. To do this, the robot will have to list the various possible reasons behind the emotion and settle on the one that fits the data best.

  And fourth, once the robot has determined the origin of this emotion, it has to make the appropriate response. This is also difficult, since there are often several possible responses, and the wrong one may make the situation worse. The robot already has, within its programming, a list of possible responses to the original emotion. It has to calculate which one will best serve the situation, which means simulating the future.
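
  Put together, these four stages might be sketched, very schematically, as the following pipeline. Every emotion label, trigger, and response in it is an invented placeholder, not a real implementation:

```python
# A schematic four-stage pipeline for handling a detected emotion.
# All labels and rules below are invented placeholders.

# Stage 1: map observed cues (face, voice) to an emotion label.
def identify_emotion(cues):
    if cues.get("mouth") == "smile" and cues.get("voice") == "loud":
        return "laughing"
    if cues.get("brow") == "furrowed":
        return "angry"
    return "neutral"

# Stage 2: a fast, reflexive response from a lookup table.
QUICK_RESPONSE = {"laughing": "grin", "angry": "step_back", "neutral": "wait"}

# Stage 3: candidate motivations behind each emotion, to be ranked
# against context (who is present, what just happened, and so on).
POSSIBLE_CAUSES = {
    "laughing": ["heard a joke", "nervousness", "mocking someone"],
    "angry": ["was insulted", "task went wrong"],
}

def most_likely_cause(emotion, context):
    # Crude stand-in for "find the reason that fits the data best":
    # pick the first candidate consistent with the context.
    for cause in POSSIBLE_CAUSES.get(emotion, []):
        if context.get(cause, False):
            return cause
    return "unknown"

# Stage 4: choose the response that best serves the situation.
def respond(emotion, cause):
    if emotion == "laughing" and cause == "mocking someone":
        return "change_subject"        # don't join in the laughter
    return QUICK_RESPONSE.get(emotion, "wait")

if __name__ == "__main__":
    cues = {"mouth": "smile", "voice": "loud"}
    emotion = identify_emotion(cues)
    cause = most_likely_cause(emotion, {"heard a joke": True})
    print(emotion, cause, respond(emotion, cause))
    # -> laughing heard a joke grin
```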

  WILL ROBOTS LIE?

  Normally, we might think of robots as being coldly analytical and rational, always telling the truth. But once robots become integrated into society, they will probably have to learn to lie or at least tactfully restrain their comments.

  In our own lives, several times in a typical day we are confronted with situations where we have to tell a white lie. If people ask us how they look, we often dare not tell the truth. White lies, in fact, are like a grease that makes society run smoothly. If we were suddenly forced to tell the whole truth (like Jim Carrey in Liar Liar), we most likely would wind up creating chaos and hurting people. People would be insulted if you told them what they really looked like or how you really felt. Bosses would fire you. Lovers would dump you. Friends would abandon you. Strangers would slap you. Some thoughts are better kept confidential.

  In the same way, robots may have to learn how to lie or conceal the truth, or else they might wind up offending people and being decommissioned by their owners. At a party, if a robot tells the truth, it could reflect badly on its owner and create an uproar. So if someone asks for its opinion, it will have to learn how to be evasive, diplomatic, and tactful. It must either dodge the question, change the subject, give platitudes for answers, reply with a question, or tell white lies (all things that today’s chat-bots are increasingly good at). This means that the robot has already been programmed to have a list of possible evasive responses, and must choose the one that creates the fewest complications.
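
  One way to picture that last step: score each canned evasive reply by how much social trouble it is predicted to cause, and pick the cheapest. A minimal sketch, with the replies and their costs invented for illustration:

```python
# Illustrative sketch: pick the evasive reply predicted to cause
# the fewest complications. Replies and scores are invented.
EVASIVE_REPLIES = {
    "change the subject":       1,
    "answer with a platitude":  2,
    "reply with a question":    2,
    "tell a small white lie":   3,
    "give the blunt truth":     9,   # usually the most costly socially
}

def choose_reply():
    # Lower score = fewer predicted complications.
    return min(EVASIVE_REPLIES, key=EVASIVE_REPLIES.get)

print(choose_reply())   # -> 'change the subject'
```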

  One of the few times that a robot would tell the entire truth would be if asked a direct question by its owner, who understands that the answer might be brutally honest. Perhaps the only other time when the robot will tell the truth is when there is a police investigation and the absolute truth is necessary. Other than that, robots will be able to freely lie or conceal the whole truth to keep the wheels of society functioning.

  In other words, robots have to be socialized, just like teenagers.

  CAN ROBOTS FEEL PAIN?

  Robots, in general, will be assigned to do types of tasks that are dull, dirty, and dangerous. There is no reason why robots can’t do repetitive or dirty jobs indefinitely, since we wouldn’t program them to feel boredom or disgust. The real problem emerges when robots are faced with dangerous jobs. At that point, we might actually want to program them to feel pain.

  We evolved the sense of pain because it helped us survive in a dangerous environment. There is a genetic defect in which children are born without the ability to feel pain. This is called congenital analgesia. At first glance, this may seem to be a blessing, since these children do not cry when they experience injury, but it is actually more of a curse. Children with this affliction have serious problems, such as biting off parts of their tongue, suffering severe skin burns, and cutting themselves, often leading to amputations of their fingers. Pain alerts us to danger, telling us when to move our hand away from the burning stove or to stop running on a twisted ankle.

  At some point robots must be programmed to feel pain, or else they will not know when to avoid precarious situations. The first sense of pain they must have is hunger (i.e., a craving for electrical energy). As their batteries run out, they will get more desperate and urgent, realizing that soon their circuits will shut down, leaving all their work in disarray. The closer they are to running out of power, the more anxious they will become.

  Also, regardless of how strong they are, robots may accidentally pick up an object that is too heavy, which could cause their limbs to break. Or they may suffer overheating by working with molten metal in a steel factory, or by entering a burning building to help firemen. Sensors for temperature and stress would alert them that their design specifications are being exceeded.
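
  In practice, such robotic “pain” might amount to little more than thresholded sensor readings mapped to urgency signals. A toy sketch, in which every design limit is invented:

```python
# Toy sketch of robot "pain": thresholded sensor readings mapped to
# urgency signals. All limits below are invented design specs.
DESIGN_LIMITS = {
    "battery_percent": 20,   # below this, "hunger" pain rises
    "joint_load_kg": 50,     # above this, risk of a broken limb
    "core_temp_c": 80,       # above this, risk of overheating
}

def pain_signals(readings):
    signals = []
    if readings["battery_percent"] < DESIGN_LIMITS["battery_percent"]:
        signals.append(("hunger",
                        DESIGN_LIMITS["battery_percent"] - readings["battery_percent"]))
    if readings["joint_load_kg"] > DESIGN_LIMITS["joint_load_kg"]:
        signals.append(("overload",
                        readings["joint_load_kg"] - DESIGN_LIMITS["joint_load_kg"]))
    if readings["core_temp_c"] > DESIGN_LIMITS["core_temp_c"]:
        signals.append(("overheating",
                        readings["core_temp_c"] - DESIGN_LIMITS["core_temp_c"]))
    # The further a reading goes past its limit, the more "painful" it is.
    return sorted(signals, key=lambda s: s[1], reverse=True)

print(pain_signals({"battery_percent": 5, "joint_load_kg": 70, "core_temp_c": 95}))
# -> [('overload', 20), ('hunger', 15), ('overheating', 15)]
```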

  But once the sensation of pain is added to their menu of emotions, this immediately raises ethical issues. Many people believe that we should not inflict unnecessary pain on animals, and people may feel the same about robots as well. This opens the door to robots’ rights. Laws may have to be passed to restrict the amount of pain and danger that a robot is allowed to face. People will not care if a robot is performing dull or dirty tasks, but if robots feel pain while doing dangerous ones, people may begin to lobby for laws to protect them. This may even start a legal conflict, with owners and manufacturers of robots arguing for increasing the level of pain that robots can endure, while ethicists argue for lowering it.

  This, in turn, may set off other ethical debates about other robot rights. Can robots own property? What happens if they accidentally hurt someone? Can they be sued or punished? Who is responsible in a lawsuit? Can a robot own another robot? This discussion raises another sticky question: Should robots be given a sense of ethics?

  ETHICAL ROBOTS

  At first, the idea of ethical robots seems like a waste of time and effort. However, this question takes on a sense of urgency when we realize that robots will make life-and-death decisions. Since they will be physically strong and have the capability of saving lives, they will have to make split-second ethical choices about whom to save first.

  Let’s say there is a catastrophic earthquake and children are trapped in a rapidly crumbling building. How should the robot allocate its energy? Should it try to save the largest number of children? Or the youngest? Or the most vulnerable? If the debris is too heavy, the robot may damage its electronics. So the robot has to decide yet another ethical question: How does it weigh the number of children it saves versus the amount of damage that it will sustain to its electronics?

  Without proper programming, the robot may simply halt, waiting for a human to make the final decision, wasting valuable time. So someone will have to program it ahead of time so that the robot automatically makes the “right” decision.

  These ethical decisions will have to be preprogrammed into the computer from the start, since there is no law of mathematics that can put a value on saving a group of children. Within its programming, there has to be a long list of things, ranked in terms of how important they are. This is tedious business. In fact, it sometimes takes a human a lifetime to learn these ethical lessons, but a robot has to learn them rapidly, before it leaves the factory, if it is to safely enter society.
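
  To make the earlier earthquake example concrete, here is a deliberately simplified sketch of how such a preprogrammed ranking might trade off lives saved against damage to the robot. The weights are invented, and choosing them is precisely the ethical judgment a human would have to make in advance:

```python
# Deliberately simplified: trade off children saved against damage the
# robot sustains. The weights are invented; deciding them is the ethical
# choice that must be made by humans before the robot leaves the factory.
VALUE_PER_CHILD = 1000      # saving a life dominates every other term
COST_PER_DAMAGE_POINT = 1   # damage to the robot's own electronics

def score(plan):
    return plan["children_saved"] * VALUE_PER_CHILD \
           - plan["damage"] * COST_PER_DAMAGE_POINT

def best_plan(plans):
    # Pick the highest-scoring plan instead of halting and waiting
    # for a human to make the final decision.
    return max(plans, key=score)

plans = [
    {"name": "lift the light debris", "children_saved": 1, "damage": 5},
    {"name": "lift the heavy debris", "children_saved": 3, "damage": 400},
]
print(best_plan(plans)["name"])   # -> 'lift the heavy debris'
```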

  Only people can make these rankings, and even then ethical dilemmas sometimes confound us. But this raises questions: Who will make the decisions? Who decides the order in which robots save human lives?

  The question of how decisions will ultimately be made will probably be resolved via a combination of the law and the marketplace. Laws will have to be passed so that there is, at minimum, a ranking of importance of whom to save in an emergency. But beyond that, there are thousands of finer ethical questions. These subtler decisions may be decided by the marketplace and common sense.

  If you work for a security firm guarding important people, you will have to tell the robot how to save people in a precise order in different situations, weighing considerations such as fulfilling its primary duty while also staying within budget.