But there is only so far we can go with this approach. Our DNA-based cells depend on protein synthesis, and while protein is a marvelously diverse substance, it suffers from severe limitations. Hans Moravec, one of the first serious thinkers to realize the potential of twenty-first-century machines, points out that “protein is not an ideal material. It is stable only in a narrow temperature and pressure range, is very sensitive to radiation, and rules out many construction techniques and components.... A genetically engineered superhuman would be just a second-rate kind of robot, designed under the handicap that its construction can only be by DNA-guided protein synthesis. Only in the eyes of human chauvinists would it have an advantage.”3

  One of evolution’s ideas that is worth keeping, however, is building our bodies from cells. This approach would retain many of our bodies’ beneficial qualities: redundancy, which provides a high degree of reliability; the ability to regenerate and repair themselves; and softness and warmth. But just as we will eventually relinquish the extremely slow speed of our neurons, we will ultimately be forced to abandon the other restrictions of our protein-based chemistry. To reinvent our cells, we look to one of the twenty-first century’s primary technologies: nanotechnology.

  NANOTECHNOLOGY: REBUILDING THE WORLD, ATOM BY ATOM

  The problems of chemistry and biology can be greatly helped if... doing things on an atomic level is ultimately developed—a development which I think cannot be avoided.

  —Richard Feynman, 1959

  Suppose someone claimed to have a microscopically exact replica (in marble, even) of Michelangelo’s David in his home. When you go to see this marvel, you find a twenty-foot-tall, roughly rectilinear hunk of pure white marble standing in his living room. “I haven’t gotten around to unpacking it yet,” he says, “but I know it’s in there.”

  —Douglas Hofstadter

  What advantages will nanotoasters have over conventional macroscopic toaster technology? First, the savings in counter space will be substantial. One philosophical point that must not be overlooked is that the creation of the world’s smallest toaster implies the existence of the world’s smallest slice of bread. In the quantum limit we must necessarily encounter fundamental toast particles, which we designate here as “croutons.”

  —Jim Cser, Annals of Improbable Research, edited by Marc Abrahams

  Humankind’s first tools were found objects: sticks used to dig up roots and stones used to break open nuts. It took our forebears tens of thousands of years to invent a sharp blade. Today we build machines with finely designed intricate mechanisms, but viewed on an atomic scale, our technology is still crude. “Casting, grinding, milling, and even lithography move atoms in great thundering statistical herds,” says Ralph Merkle, a leading nanotechnology theorist at Xerox’s Palo Alto Research Center. He adds that current manufacturing methods are “like trying to make things out of Legos with boxing gloves on.... In the future, nanotechnology will let us take off the boxing gloves.”4

  Nanotechnology is technology built on the atomic level: building machines one atom at a time. “Nano” refers to a billionth of a meter, which is about the width of five carbon atoms. We have one existence proof of the feasibility of nanotechnology: life on Earth. Little machines in our cells called ribosomes build organisms such as humans one molecule, that is, one amino acid, at a time, following digital templates coded in another molecule called DNA. Life on Earth has mastered the ultimate goal of nanotechnology, which is self-replication.
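  To make the scale concrete, here is a quick back-of-the-envelope check in Python; the carbon-atom diameter used is an approximation, assumed only for illustration:

```python
# Back-of-the-envelope scale check; the atomic diameter is an approximation.
nanometer_m = 1e-9          # one nanometer, in meters
carbon_diameter_m = 0.2e-9  # approximate width of a carbon atom, in meters

atoms_per_nanometer = nanometer_m / carbon_diameter_m
print(f"about {atoms_per_nanometer:.0f} carbon atoms span one nanometer")
# -> about 5 carbon atoms span one nanometer
```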

  But as mentioned above, Earthly life is limited by the particular molecular building block it has selected. Just as our human-created computational technology will ultimately exceed the capacity of natural computation (electronic circuits are already millions of times faster than human neural circuits), our twenty-first-century physical technology will also greatly exceed the capabilities of the amino acid-based nanotechnology of the natural world.

  The concept of building machines atom by atom was first described in a 1959 talk at Caltech titled “There’s Plenty of Room at the Bottom,” by physicist Richard Feynman, the same guy who first suggested the possibility of quantum computing.5 The idea was developed in some detail by Eric Drexler twenty years later in his book Engines of Creation.6 The book helped inspire the cryonics movement of the 1980s, in which people had their heads (with or without bodies) frozen in the hope that a future time would possess the molecule-scale technology to overcome their mortal diseases, as well as undo the effects of freezing and defrosting. Whether a future generation would be motivated to revive all these frozen brains was another matter.

  After publication of Engines of Creation, the response to Drexler’s ideas was skeptical, and he had difficulty filling out his MIT Ph.D. committee despite Marvin Minsky’s agreement to supervise it. Drexler’s dissertation, published in 1992 as a book titled Nanosystems: Molecular Machinery, Manufacturing, and Computation, provided a comprehensive proof of concept, including detailed analyses and specific designs.7 A year later, the first nanotechnology conference attracted only a few dozen researchers. The fifth annual conference, held in December 1997, boasted 350 scientists who were far more confident of the practicality of their tiny projects. Nanothinc, an industry think tank, estimated in 1997 that the field was already producing $5 billion in annual revenues from nanotechnology-related technologies, including micromachines, microfabrication techniques, nanolithography, nanoscale microscopes, and others. This figure has been more than doubling each year.8

  The Age of Nanotubes

  One key building material for tiny machines is, again, the nanotube. Although built on an atomic scale, its hexagonal lattice of carbon atoms is extremely strong and durable. “You can do anything you damn well want with these tubes and they’ll just keep on truckin’,” says Richard Smalley, one of the chemists who received the Nobel Prize for discovering the buckyball molecule.9 A car made of nanotubes would be stronger and more stable than a car made of steel, but would weigh only fifty pounds. A spacecraft made of nanotubes could have the size and strength of the U.S. space shuttle, but weigh no more than a conventional car. Nanotubes handle heat extremely well, far better than the fragile amino acids that people are built out of. They can be assembled into all kinds of shapes: wirelike strands, sturdy girders, gears, etcetera. Nanotubes are formed of carbon atoms, which are in plentiful supply in the natural world.

  As I mentioned earlier, nanotubes can also be used for extremely efficient computation, so both the structural and the computational technology of the twenty-first century will likely be constructed from the same stuff. Indeed, because the same nanotubes that form physical structures can also compute, future nanomachines can have their brains distributed throughout their bodies.

  The best-known examples of nanotechnology to date, while not altogether practical, are beginning to show the feasibility of engineering at the atomic level. IBM created its corporate logo using individual atoms as pixels.10 In 1996, Texas Instruments built a chip-sized device with half a million movable mirrors to be used in a tiny high-resolution projector.11 TI sold $100 million worth of these micromirror devices in 1997.

  Chih-Ming Ho of UCLA is designing flying machines whose surfaces are covered with microflaps that control the flow of air in a manner similar to the flaps on a conventional airplane.12 Andrew Berlin at Xerox’s Palo Alto Research Center is designing a printer that uses microscopic air valves to move paper documents precisely.13

  Cornell graduate student and rock musician Dustin Carr built a realistic-looking but microscopic guitar with strings only fifty nanometers in diameter. Carr’s creation is a fully functional musical instrument, but his fingers are too large to play it. Besides, the strings vibrate 10 million times per second, far beyond the twenty-thousand-cycles-per-second limit of human hearing.14
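  A short calculation puts that frequency gap in perspective (both figures are taken from the passage above):

```python
import math

string_hz = 10_000_000     # vibration rate cited for the nanoguitar strings
hearing_limit_hz = 20_000  # upper limit of human hearing cited above

ratio = string_hz / hearing_limit_hz
octaves_above = math.log2(ratio)
print(f"{ratio:.0f} times the limit, about {octaves_above:.0f} octaves above audibility")
# -> 500 times the limit, about 9 octaves above audibility
```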

  The Holy Grail of Self-Replication: Little Fingers and a Little Intelligence

  Tiny fingers represent something of a holy grail for nanotechnologists. With little fingers and computation, nanomachines would have in their Lilliputian world what people have in the big world: intelligence and the ability to manipulate their environment. Then these little machines could build replicas of themselves, achieving the field’s key objective.

  The reason that self-replication is important is that it is too expensive to build these tiny machines one at a time. To be effective, nanometer-sized machines need to come in the trillions. The only way to achieve this economically is through combinatorial explosion: let the machines build themselves.
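  The arithmetic behind this claim is simple exponential doubling. A minimal sketch, assuming each machine builds one copy of itself per generation (the generation time itself is left unspecified):

```python
# Minimal sketch: population growth of a self-replicating machine,
# assuming each machine builds one copy of itself per generation.

TARGET = 10**12  # a trillion machines

population = 1
generations = 0
while population < TARGET:
    population *= 2
    generations += 1

print(f"{generations} doublings yield {population:,} machines")
# -> 40 doublings yield 1,099,511,627,776 machines
```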

  Drexler, Merkle (a coinventor of public key encryption, the primary method of encrypting messages), and others have convincingly described how such a self-replicating nanorobot—nanobot—could be constructed. The trick is to provide the nanobot with sufficiently flexible manipulators—arms and hands—so that it is capable of building a copy of itself. It needs some means of mobility so that it can find the requisite raw materials. It requires some intelligence so that it can solve the little problems that will arise when each nanobot goes about building a complicated little machine like itself. Finally, a really important requirement is that it needs to know when to stop replicating.

  Morphing in the Real World

  Self-replicating machines built at the atomic level could truly transform the world we live in. They could build extremely inexpensive solar cells, allowing the replacement of messy fossil fuels. Since solar cells require a large surface area to collect sufficient sunlight, they could be placed in orbit, with the energy beamed down to Earth.

  Nanobots launched into our bloodstreams could supplement our natural immune system and seek out and destroy pathogens, cancer cells, arterial plaque, and other disease agents. In the vision that inspired the cryonics enthusiasts, diseased organs could be rebuilt. We will be able to reconstruct any or all of our bodily organs and systems, and do so at the cellular level. I talked in the last chapter about reverse engineering and emulating the salient computational functionality of human neurons. In the same way, it will become possible to reverse engineer and replicate the physical and chemical functionality of any human cell. In the process we will be in a position to greatly extend the durability, strength, temperature range, and other qualities and capabilities of our cellular building blocks.

  We will then be able to grow stronger, more capable organs by redesigning the cells that constitute them and building them with far more versatile and durable materials. As we go down this road, we’ll find that some redesign of the body makes sense at multiple levels. For example, if our cells are no longer vulnerable to the conventional pathogens, we may not need the same kind of immune system. But we will need new nanoengineered protections for a new assortment of nanopathogens.

  Food, clothing, diamond rings, and buildings could all assemble themselves molecule by molecule. Any sort of product could be instantly created when and where we need it. Indeed, the world could continually reassemble itself to meet our changing needs, desires, and fantasies. By the late twenty-first century, nanotechnology will permit objects such as furniture, buildings, clothing, even people, to change their appearance and other characteristics—essentially to change into something else—in a split second.

  These technologies will emerge gradually (I will attempt to delineate the different gradations of nanotechnology as I talk about each of the decades of the twenty-first century in Part III of this book). There is a clear incentive to go down this path. Given a choice, people will prefer to keep their bones from crumbling, their skin supple, their life systems strong and vital. Improving our lives through neural implants on the mental level, and nanotechnology-enhanced bodies on the physical level, will be popular and compelling. It is another one of those slippery slopes—there is no obvious place to stop this progression until the human race has largely replaced the brains and bodies that evolution first provided.

  A Clear and Future Danger

  Without self-replication, nanotechnology is neither practical nor economically feasible. And therein lies the rub. What happens if a little software problem (inadvertent or otherwise) fails to halt the self-replication? We may have more nanobots than we want. They could eat up everything in sight.

  The movie The Blob (of which there are two versions) was a vision of nanotechnology run amok. The movie’s villain was this intelligent self-replicating gluttonous stuff that fed on organic matter. Recall that nanotechnology is likely to be built from carbon-based nanotubes, so, like the Blob, it will build itself from organic matter, which is rich in carbon. Unlike mere animal-based cancers, an exponentially exploding nanomachine population would feed on any carbon-based matter. Tracking down all of these bad nanointelligences would be like trying to find trillions of microscopic needles—rapidly moving ones at that—in at least as many haystacks. There have been proposals for nanoscale immunity technologies: good little antibody machines that would go after the bad little machines. The nanoantibodies would, of course, have to scale up at least as quickly as the epidemic of marauding nanomiscreants. There could be a lot of collateral damage as these trillions of machines battle it out.
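  A toy model makes the scaling problem concrete. All of the counts and doubling times below are hypothetical, chosen only to illustrate the point: a defense that starts late and doubles more slowly than the epidemic never catches up.

```python
# Toy model: runaway replicators versus defensive "nanoantibodies".
# Every rate and starting count here is a hypothetical illustration.

def population(start_count, doubling_hours, elapsed_hours):
    """Population after elapsed_hours, doubling every doubling_hours."""
    return start_count * 2 ** (elapsed_hours / doubling_hours)

HEAD_START = 24  # hours before the defense is deployed (assumed)

for t in range(0, 97, 24):
    rogue = population(1, doubling_hours=1.0, elapsed_hours=t)
    defense = population(1_000_000, doubling_hours=1.5,
                         elapsed_hours=max(0, t - HEAD_START))
    print(f"t={t:3d}h  rogue={rogue:.2e}  defense={defense:.2e}")
# By t=96h the rogue population has outgrown the defense by a factor of
# more than a hundred million, despite the defense's million-unit start.
```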

  Now that I have raised this specter, I will try, unconvincingly perhaps, to put the peril in perspective. I believe that it will be possible to engineer self-replicating nanobots in such a way that an inadvertent, undesired population explosion would be unlikely. I realize that this may not be completely reassuring, coming from a software developer whose products (like those of my competitors) crash once in a while (but rarely—and when they do, it’s the fault of the operating system!). There is a concept in software development of “mission critical” applications. These are software programs that control a process on which people are heavily dependent. Examples of mission-critical software include life-support systems in hospitals, automated surgical equipment, autopilot flying and landing systems, and other software-based systems that affect the well-being of a person or organization. It is feasible to create extremely high levels of reliability in these programs. There are examples of complex technology in use today in which a mishap would severely imperil public safety. A conventional explosion in an atomic power plant could spray deadly plutonium across heavily populated areas. Despite the meltdown at Chernobyl, this apparently has occurred only twice in the decades that we have had hundreds of such plants operating, both incidents involving recently acknowledged reactor calamities in the Chelyabinsk region of Russia.15 There are tens of thousands of nuclear weapons, and none has ever exploded in error.

  I admit that the above paragraph is not entirely convincing. But the bigger danger is the intentional hostile use of nanotechnology. Once the basic technology is available, it would not be difficult to adapt it as an instrument of war or terrorism. It is not the case that someone would have to be suicidal to use such weapons. The nanoweapons could easily be programmed to replicate only against an enemy, for example, only in a particular geographical area. Nuclear weapons, for all their destructive potential, are at least relatively local in their effects. The self-replicating nature of nanotechnology makes it a far greater danger.

  VIRTUAL BODIES

  We don’t always need real bodies. If we happen to be in a virtual environment, then a virtual body will do just fine. Virtual reality started with the concept of computer games, particularly ones that provided a simulated environment. The first was Spacewar, written by early artificial-intelligence researchers to pass the time while waiting for programs to compile on their slow 1960s computers.16 The synthetic space surroundings were easy to render on low-resolution monitors: Stars and other space objects were just illuminated pixels.

  Computer games and computerized video games have become more realistic over time, but you cannot completely immerse yourself in these imagined worlds without some help from your imagination. For one thing, you can see the edges of the screen, and the all too real world that you have never left is still visible beyond those borders.

  If we’re going to enter a new world, we had better get rid of traces of the old. In the 1990s the first generation of virtual reality was introduced, in which you don a special visual helmet that takes over your entire visual field. The key to virtual reality is that when you move your head, the scene instantly repositions itself so that you are now looking at a different region of a three-dimensional scene. The intention is to simulate what happens when you turn your real head in the real world: The images captured by your retinas rapidly change. Your brain nonetheless understands that the world has remained stationary and that the image is sliding across your retinas only because your head is rotating.
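  In software terms, the repositioning works by driving a virtual camera from the measured head pose. A minimal sketch, using a yaw-only rotation for simplicity (real systems track the head’s full position and orientation):

```python
# Minimal sketch of head-tracked rendering: rotating the head by some angle
# rotates every world point the opposite way in the viewer's field of view.
import math

def world_to_view(point, head_yaw_radians):
    """Rotate a world-space point into view space for a given head yaw."""
    c = math.cos(-head_yaw_radians)
    s = math.sin(-head_yaw_radians)
    x, y, z = point
    return (c * x + s * z, y, -s * x + c * z)

# A landmark ten meters straight ahead of the viewer...
landmark = (0.0, 0.0, -10.0)
# ...slides toward the edge of the visual field when the head turns 30 degrees.
print(world_to_view(landmark, math.radians(30)))
# -> (5.0, 0.0, -8.66...)
```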

  Like most first-generation technologies, virtual reality has not been fully convincing. Because rendering a new scene requires a lot of computation, there is a lag in producing the new perspective. Any noticeable delay tips off your brain that the world you’re looking at is not entirely real. The resolution of virtual reality displays has also been inadequate to create a fully satisfactory illusion. Finally, contemporary virtual reality helmets are bulky and uncomfortable.
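  To see why the delay matters, consider a rough motion-to-photon budget. The stage timings below are illustrative assumptions, and the roughly 20-millisecond threshold is a commonly cited rule of thumb rather than a hard limit:

```python
# Rough motion-to-photon latency budget for a head-mounted display.
# All stage timings are illustrative assumptions, not measurements.

budget_ms = {
    "head tracking":   4.0,   # sensor sampling and pose estimation
    "scene rendering": 16.7,  # one frame at 60 frames per second
    "display refresh": 8.0,   # waiting for the next screen update
}

total = sum(budget_ms.values())
NOTICEABLE_MS = 20.0  # approximate perceptual threshold (rule of thumb)

print(f"total latency: {total:.1f} ms")
print("noticeable lag" if total > NOTICEABLE_MS else "within threshold")
# -> total latency: 28.7 ms, noticeable lag
```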

  What’s needed to remove the rendering delay and to boost display resolution is yet faster computers, which we know are always on the way. By 2007, high-quality virtual reality with convincing artificial environments, virtually instantaneous rendering, and high-definition displays will be comfortable to wear and available at computer game prices.