Once a computer achieves a human level of intelligence, it will necessarily roar past it. Since their inception, computers have significantly exceeded human mental dexterity in their ability to remember and process information. A computer can remember billions or even trillions of facts perfectly, while we are hard pressed to remember a handful of phone numbers. A computer can quickly search a database with billions of records in fractions of a second. Computers can readily share their knowledge bases. The combination of human-level intelligence in a machine with a computer’s inherent superiority in the speed, accuracy, and sharing ability of its memory will be formidable.

  Mammalian neurons are marvelous creations, but we wouldn’t build them the same way. Much of their complexity is devoted to supporting their own life processes, not to their information-handling abilities. Furthermore, neurons are extremely slow; electronic circuits are at least a million times faster. Once a computer achieves a human level of ability in understanding abstract concepts, recognizing patterns, and other attributes of human intelligence, it will be able to apply this ability to a knowledge base of all human-acquired—and machine-acquired—knowledge.

  A common reaction to the proposition that computers will seriously compete with human intelligence is to dismiss this specter based primarily on an examination of contemporary capability. After all, when I interact with my personal computer, its intelligence seems limited and brittle, if it appears intelligent at all. It is hard to imagine one’s personal computer having a sense of humor, holding an opinion, or displaying any of the other endearing qualities of human thought.

  But the state of the art in computer technology is anything but static. Computer capabilities are emerging today that were considered impossible one or two decades ago. Examples include the ability to accurately transcribe normal continuous human speech, to understand and respond intelligently to natural language, to recognize patterns in medical data such as electrocardiograms and blood tests with an accuracy rivaling that of human physicians, and, of course, to play chess at a world-championship level. In the next decade, we will see translating telephones that provide real-time speech translation from one human language to another, intelligent computerized personal assistants that can converse and rapidly search and understand the world’s knowledge bases, and a profusion of other machines with increasingly broad and flexible intelligence.

  In the second decade of the next century, it will become increasingly difficult to draw any clear distinction between the capabilities of human and machine intelligence. The advantages of computer intelligence in terms of speed, accuracy, and capacity will be clear. The advantages of human intelligence, on the other hand, will become increasingly difficult to distinguish.

  The skills of computer software are already better than many people realize. It is frequently my experience that when I demonstrate recent advances in, say, speech or character recognition, observers are surprised at the state of the art. For example, a typical computer user’s last experience with speech-recognition technology may have been a low-end, freely bundled piece of software from several years ago that recognized a limited vocabulary, required pauses between words, and did a poor job even at that. These users are then surprised to see contemporary systems that can recognize fully continuous speech drawn from a 60,000-word vocabulary, with accuracy levels comparable to a human typist.

  Also keep in mind that the progression of computer intelligence will sneak up on us. As just one example, consider Garry Kasparov’s confidence in 1990 that a computer would never come close to defeating him. After all, he had played the best computers, and their chess-playing ability—compared to his—was pathetic. But computer chess playing made steady progress, gaining forty-five rating points each year. In 1997, a computer sailed past Kasparov, at least in chess. There has been a great deal of commentary that other human endeavors are far more difficult to emulate than chess playing. This is true. In many areas—the ability to write a book on computers, for example—computers are still pathetic. But as computers continue to gain in capacity at an exponential rate, we will have the same experience in these other areas that Kasparov had in chess. Over the next several decades, machine competence will rival—and ultimately surpass—any particular human skill one cares to cite, including our marvelous ability to place our ideas in a broad diversity of contexts.

  Evolution has been seen as a billion-year drama that led inexorably to its grandest creation: human intelligence. The emergence in the early twenty-first century of a new form of intelligence on Earth that can compete with, and ultimately significantly exceed, human intelligence will be a development of greater import than any of the events that have shaped human history. It will be no less important than the creation of the intelligence that created it, and will have profound implications for all aspects of human endeavor, including the nature of work, human learning, government, warfare, the arts, and our concept of ourselves.

  This specter is not yet here. But with the emergence of computers that truly rival and exceed the human brain in complexity will come a corresponding ability of machines to understand and respond to abstractions and subtleties. Human beings appear to be complex in part because of our competing internal goals. Values and emotions represent goals that often conflict with each other, and are an unavoidable by-product of the levels of abstraction that we deal with as human beings. As computers achieve a comparable—and greater—level of complexity, and as they are increasingly derived at least in part from models of human intelligence, they, too, will necessarily utilize goals with implicit values and emotions, although not necessarily the same values and emotions that humans exhibit.

  A variety of philosophical issues will emerge. Are computers thinking, or are they just calculating? Conversely, are human beings thinking, or are they just calculating? The human brain presumably follows the laws of physics, so it must be a machine, albeit a very complex one. Is there an inherent difference between human thinking and machine thinking? To pose the question another way, once computers are as complex as the human brain, and can match the human brain in subtlety and complexity of thought, are we to consider them conscious? This is a difficult question even to pose, and some philosophers believe it is not a meaningful question; others believe it is the only meaningful question in philosophy. This question actually goes back to Plato’s time, but with the emergence of machines that genuinely appear to possess volition and emotion, the issue will become increasingly compelling.

  For example, if a person scans his brain through a noninvasive scanning technology of the twenty-first century (such as an advanced magnetic resonance imaging), and downloads his mind to his personal computer, is the “person” who emerges in the machine the same consciousness as the person who was scanned? That “person” may convincingly assure you that “he” grew up in Brooklyn, went to college in Massachusetts, walked into a scanner here, and woke up in the machine there. The original person who was scanned, on the other hand, will acknowledge that the person in the machine does indeed appear to share his history, knowledge, memory, and personality, but is otherwise an impostor, a different person.

  Even if we limit our discussion to computers that are not directly derived from a particular human brain, they will increasingly appear to have their own personalities, evidencing reactions that we can only label as emotions and articulating their own goals and purposes. They will appear to have their own free will. They will claim to have spiritual experiences. And people—those still using carbon-based neurons or otherwise—will believe them.

  One often reads predictions of the next several decades discussing a variety of demographic, economic, and political trends that largely ignore the revolutionary impact of machines with their own opinions and agendas. Yet we need to reflect on the implications of the gradual, yet inevitable, emergence of true competition to the full range of human thought in order to comprehend the world that lies ahead.

  PART ONE

  PROBING THE PAST

 
CHAPTER ONE

  THE LAW OF TIME AND CHAOS

  A (VERY BRIEF) HISTORY OF THE UNIVERSE: TIME SLOWING DOWN

  The universe is made of stories, not of atoms.

  —Muriel Rukeyser

  Is the universe a great mechanism, a great computation, a great symmetry, a great accident or a great thought?

  —John D. Barrow

  As we start at the beginning, we will notice an unusual attribute of the nature of time, one that is critical to our passage to the twenty-first century. Our story begins perhaps 15 billion years ago. No conscious life existed to appreciate the birth of our Universe at the time, but we appreciate it now, so retroactively it did happen. (In retrospect—from one perspective of quantum mechanics—we could say that any Universe that fails to evolve conscious life to apprehend its existence never existed in the first place.)

  It was not until 10^-43 seconds (a tenth of a millionth of a trillionth of a trillionth of a trillionth of a second) after the birth of the Universe[1] that the situation had cooled off sufficiently (to 100 million trillion trillion degrees) that a distinct force—gravity—evolved.

  Not much happened for another 10^-34 seconds (this is also a very tiny fraction of a second, but it is a billion times longer than 10^-43 seconds), at which point an even cooler Universe (now only a billion billion billion degrees) allowed the emergence of matter in the form of electrons and quarks. To keep things balanced, antimatter appeared as well. It was an eventful time, as new forces evolved at a rapid rate. We were now up to three: gravity, the strong force,[2] and the electroweak force.[3]

  After another 10^-10 seconds (a tenth of a billionth of a second), the electroweak force split into the electromagnetic and weak forces[4] we know so well today.

  Things got complicated after another 10^-5 seconds (ten millionths of a second). With the temperature now down to a relatively balmy trillion degrees, the quarks came together to form protons and neutrons. The antiquarks did the same, forming antiprotons.

  Somehow, the matter particles achieved a slight edge. How this happened is not entirely clear. Up until then, everything had seemed, so, well, even. But had everything stayed evenly balanced, it would have been a rather boring Universe. For one thing, life never would have evolved, and thus we could conclude that the Universe would never have existed in the first place.

  For every 10 billion antiprotons, the Universe contained 10 billion and 1 protons. The protons and antiprotons collided, causing the emergence of another important phenomenon: light (photons). Thus, almost all of the antimatter was destroyed, leaving matter as dominant. (This shows you the danger of allowing a competitor to achieve even a slight advantage.)

  Of course, had antimatter won, its descendants would have called it matter and would have called matter antimatter, so we would be back where we started (perhaps that is what happened).

  After another second (a second is a very long time compared to some of the earlier chapters in the Universe’s history, so notice how the time frames are growing exponentially larger), the electrons and antielectrons (called positrons) followed the lead of the protons and antiprotons and similarly annihilated each other, leaving mostly the electrons.

  After another minute, the neutrons and protons began coalescing into heavier nuclei, such as helium, lithium, and heavy forms of hydrogen. The temperature was now only a billion degrees.

  About 300,000 years later (things are slowing down now rather quickly), with the average temperature now only 3,000 degrees, the first atoms were created as the nuclei took control of nearby electrons.

  After a billion years, these atoms formed large clouds that gradually swirled into galaxies.

  After another two billion years, the matter within the galaxies coalesced further into distinct stars, many with their own solar systems.

  Three billion years later, circling an unexceptional star on the arm of a common galaxy, an unremarkable planet we call the Earth was born.

  Now before we go any further, let’s notice a striking feature of the passage of time. Events moved quickly at the beginning of the Universe’s history. We had three paradigm shifts in just the first billionth of a second. Later on, events of cosmological significance took billions of years. The nature of time is that it inherently moves in an exponential fashion—either geometrically gaining in speed, or, as in the history of our Universe, geometrically slowing down. Time only seems to be linear during those eons in which not much happens. Thus, for most of history, the linear passage of time has been a reasonable approximation. But that is not the inherent nature of time.
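  The slowdown described above can be made concrete with a short Python sketch. The milestone labels and the conversion of the chapter’s figures into seconds are my own rough approximations, included only to illustrate the pattern:

```python
# Milestones from this chapter, as approximate seconds after the Big Bang.
YEAR = 365.25 * 86400  # seconds in a year

epochs = [
    ("gravity emerges",            1e-43),
    ("quarks and electrons",       1e-34),
    ("electroweak force splits",   1e-10),
    ("protons and neutrons form",  1e-5),
    ("electron-positron era ends", 1.0),
    ("light nuclei form",          60.0),
    ("first atoms",                300_000 * YEAR),
    ("galaxies form",              1e9 * YEAR),
]

# Each milestone arrives orders of magnitude later than the one before:
for (name_a, t_a), (name_b, t_b) in zip(epochs, epochs[1:]):
    print(f"{name_b}: ~{t_b / t_a:.0e} times later than {name_a}")
```

  Running it shows that the ratio between successive milestones starts at about a billion and never settles back toward one, which is the signature of an exponentially stretching timeline rather than a linear one.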

  Why is this significant? It’s not when you’re stuck in the eons in which not much happens. But it is of great significance when you find yourself in the “knee of the curve,” those periods in which the exponential nature of the curve of time explodes either inwardly or outwardly. It’s like falling into a black hole (in that case, time accelerates exponentially faster as one falls in).

  The Speed of Time

  But wait a second, how can we say that time is changing its “speed”? We can talk about the rate of a process, in terms of its progress per second, but can we say that time is changing its rate? Can time start moving at, say, two seconds per second?

  Einstein said exactly this—time is relative to the entities experiencing it.[5] One man’s second can be another woman’s forty years. Einstein gives the example of a man who travels at very close to the speed of light to a star—say, twenty light-years away. From our Earth-bound perspective, the trip takes slightly more than twenty years in each direction. When the man gets back, his wife has aged forty years. For him, however, the trip was rather brief. If he travels at close enough to the speed of light, it may have only taken a second or less (from a practical perspective we would have to consider some limitations, such as the time to accelerate and decelerate without crushing his body). Whose time frame is the correct one? Einstein says they are both correct, and exist only relative to each other.
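  Einstein’s twenty-light-year example reduces to a short computation. Here is a minimal Python sketch; the function name and sample speeds are illustrative, and, as the text notes one must in practice, it ignores the time needed to accelerate and decelerate:

```python
import math

def round_trip_times(distance_ly, v_frac):
    """Round-trip durations, in years, for a voyage to a star
    `distance_ly` light-years away at `v_frac` of the speed of light.
    Returns (Earth-frame time, traveler's proper time)."""
    earth_years = 2 * distance_ly / v_frac       # t = d / v, out and back
    gamma = 1.0 / math.sqrt(1.0 - v_frac ** 2)   # Lorentz factor
    return earth_years, earth_years / gamma      # proper time is shorter

# The closer v gets to c, the more the traveler's clock lags Earth's:
for v in (0.9, 0.999, 0.9999999):
    earth, traveler = round_trip_times(20, v)
    print(f"v = {v}c: Earth ages {earth:.2f} yr, traveler ages {traveler:.4f} yr")
```

  At 90 percent of light speed the traveler already ages only about nineteen years to Earth’s forty-four; as v approaches c, the traveler’s elapsed time shrinks toward zero, which is the sense in which one man’s second can be another woman’s forty years.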

  Certain species of birds have a life span of only several years. If you observe their rapid movements, it appears that they are experiencing the passage of time on a different scale. We experience this in our own lives. A young child’s rate of change and experience of time is different from that of an adult. Of particular note, we will see that the acceleration in the passage of time for evolution is moving in a different direction than that for the Universe from which it emerges.

  It is in the nature of exponential growth that events develop extremely slowly for extremely long periods of time, but as one glides through the knee of the curve, events erupt at an increasingly furious pace. And that is what we will experience as we enter the twenty-first century.

  EVOLUTION: TIME SPEEDING UP

  In the beginning was the word.... And the word became flesh.

  —John 1:1,14

  A great deal of the universe does not need any explanation. Elephants, for instance. Once molecules have learnt to compete and create other molecules in their own image, elephants, and things resembling elephants, will in due course be found roaming through the countryside.

  —Peter Atkins

  The further backward you look, the further forward you can see.

  —Winston Churchill

  We’ll come back to the knee of the curve, but let’s delve further into the exponential nature of time. In the nineteenth century, a set of unifying principles called the laws of thermodynamics[6] was postulated. As the name implies, they deal with the dynamic nature of heat and were the first major refinement of the laws of classical mechanics perfected by Isaac Newton a century earlier. Whereas Newton had described a world of clockwork perfection in which particles and objects of all sizes followed highly disciplined, predictable patterns, the laws of thermodynamics describe a world of chaos. Indeed, that is what heat is. Heat is the chaotic—unpredictable—movement of the particles that make up the world. A corollary of the second law of thermodynamics is that in a closed system (interacting entities and forces not subject to outside influence; for example, the Universe), disorder (called “entropy”) increases. Thus, left to its own devices, a system such as the world we live in becomes increasingly chaotic. Many people find this describes their lives rather well. But in the nineteenth century, the laws of thermodynamics were considered a disturbing discovery. At the beginning of that century, it appeared that the basic principles governing the world were both understood and orderly. There were a few details left to be filled in, but the basic picture was under control. Thermodynamics was the first contradiction to this complacent picture. It would not be the last.

  The second law of thermodynamics, sometimes called the Law of Increasing Entropy, would seem to imply that the natural emergence of intelligence is impossible. Intelligent behavior is the opposite of random behavior, and any system capable of intelligent responses to its environment needs to be highly ordered. The chemistry of life, particularly of intelligent life, is composed of exceptionally intricate designs. Out of the increasingly chaotic swirl of particles and energy in the world, extraordinary designs somehow emerged. How do we reconcile the emergence of intelligent life with the Law of Increasing Entropy?

  There are two answers here. First, while the Law of Increasing Entropy would appear to contradict the thrust of evolution, which is toward increasingly elaborate order, the two phenomena are not inherently contradictory. The order of life takes place amid great chaos, and the existence of life-forms does not appreciably affect the measure of entropy in the larger system in which life has evolved. An organism is not a closed system. It is part of a larger system we call the environment, which remains high in entropy. In other words, the order represented by the existence of life-forms is insignificant in terms of measuring overall entropy.