The Singularity Is Near: When Humans Transcend Biology
They are also potentially very fast. Peter Burke and his colleagues at the University of California at Irvine recently demonstrated nanotube circuits operating at 2.5 gigahertz (GHz). However, in Nano Letters, a peer-reviewed journal of the American Chemical Society, Burke says the theoretical speed limit for these nanotube transistors “should be terahertz [1 THz = 1,000 GHz], which is about 1,000 times faster than modern computer speeds.”8 One cubic inch of nanotube circuitry, once fully developed, would be up to one hundred million times more powerful than the human brain.9
Nanotube circuitry was controversial when I discussed it in 1999, but there has been dramatic progress in the technology over the past six years. Two major strides were made in 2001. A nanotube-based transistor (with dimensions of one by twenty nanometers), operating at room temperature and using only a single electron to switch between on and off states, was reported in the July 6, 2001, issue of Science.10 Around the same time, IBM also demonstrated an integrated circuit with one thousand nanotube-based transistors.11
More recently, we have seen the first working models of nanotube-based circuitry. In January 2004 researchers at the University of California at Berkeley and Stanford University created an integrated memory circuit based on nanotubes.12 One of the challenges in using this technology is that some nanotubes are conductive (that is, they simply transmit electricity), while others act as semiconductors (that is, they are capable of switching and can therefore implement logic gates). The difference in capability is based on subtle structural features. Until recently, sorting them out required manual operations, which would not be practical for building large-scale circuits. The Berkeley and Stanford scientists addressed this issue by developing a fully automated method of sorting and discarding the nonsemiconductor nanotubes.
Alignment is another challenge for nanotube circuits, since nanotubes tend to grow in every direction. In 2001 IBM scientists demonstrated that nanotube transistors could be grown in bulk, similar to silicon transistors. They used a process called “constructive destruction,” which destroys defective nanotubes right on the wafer instead of sorting them out manually. Thomas Theis, director of physical sciences at IBM’s Thomas J. Watson Research Center, said at the time, “We believe that IBM has now passed a major milestone on the road toward molecular-scale chips. . . . If we are ultimately successful, then carbon nanotubes will enable us to indefinitely maintain Moore’s Law in terms of density, because there is very little doubt in my mind that these can be made smaller than any future silicon transistor.”13 In May 2003 Nantero, a small company in Woburn, Massachusetts, cofounded by Harvard University researcher Thomas Rueckes, took the process a step further when it demonstrated a single-chip wafer with ten billion nanotube junctions, all aligned in the proper direction. The Nantero technology uses standard lithography equipment to automatically remove the incorrectly aligned nanotubes. Nantero’s use of standard equipment has excited industry observers because the technology would not require expensive new fabrication machines. The Nantero design provides random access as well as nonvolatility (data is retained when the power is off), meaning that it could potentially replace all of the primary forms of memory: RAM, flash, and disk.
Computing with Molecules. In addition to nanotubes, major progress has been made in recent years in computing with just one or a few molecules. The idea of computing with molecules was first suggested in the early 1970s by IBM’s Avi Aviram and Northwestern University’s Mark A. Ratner.14 At that time the enabling technologies did not yet exist: concurrent advances in electronics, physics, chemistry, and even the reverse engineering of biological processes were required for the idea to gain traction.
In 2002 scientists at the University of Wisconsin and University of Basel created an “atomic memory drive” that uses atoms to emulate a hard drive. A single silicon atom could be added or removed from a block of twenty others using a scanning tunneling microscope. Using this process, researchers believe, the system could be used to store millions of times more data on a disk of comparable size—a density of about 250 terabits of data per square inch—although the demonstration involved only a small number of bits.15
The one-terahertz speed predicted by Peter Burke for nanotube circuits looks increasingly accurate, given the nanoscale transistor created by scientists at the University of Illinois at Urbana-Champaign. It runs at a frequency of 604 gigahertz (more than half a terahertz).16
One type of molecule that researchers have found to have desirable properties for computing is called a “rotaxane,” which can switch states by changing the energy level of a ringlike structure contained within the molecule. Rotaxane memory and electronic switching devices have been demonstrated, and they show the potential of storing one hundred gigabits (10^11 bits) per square inch. The potential would be even greater if organized in three dimensions.
Self-Assembly. Self-assembly of nanoscale circuits is another key enabling technique for effective nanoelectronics. Self-assembly allows improperly formed components to be discarded automatically and makes it possible for the potentially trillions of circuit components to organize themselves, rather than be painstakingly assembled in a top-down process. It would enable large-scale circuits to be created in test tubes rather than in multibillion-dollar factories, using chemistry rather than lithography, according to UCLA scientists.17 Purdue University researchers have already demonstrated self-organizing nanotube structures, using the same principle that causes DNA strands to link together in stable structures.18
Harvard University scientists took a key step forward in June 2004 when they demonstrated another self-organizing method that can be used on a large scale.19 The technique starts with photolithography to create an etched array of interconnects (connections between computational elements). A large number of nanowire field-effect transistors (a common form of transistors) and nanoscale interconnects are then deposited on the array. These then connect themselves in the correct pattern.
In 2004 researchers at the University of Southern California and NASA’s Ames Research Center demonstrated a method that self-organizes extremely dense circuits in a chemical solution.20 The technique creates nanowires spontaneously and then causes nanoscale memory cells, each able to hold three bits of data, to self-assemble onto the wires. The technology has a storage capacity of 258 gigabits of data per square inch (which researchers claim could be increased tenfold), compared to 6.5 gigabits on a flash memory card. A year earlier, in 2003, IBM demonstrated a working memory device using polymers that self-assemble into twenty-nanometer-wide hexagonal structures.21
It’s also important that nanocircuits be self-configuring. The large number of circuit components and their inherent fragility (due to their small size) make it inevitable that some portions of a circuit will not function correctly. It will not be economically feasible to discard an entire circuit simply because a small number of transistors out of a trillion are nonfunctioning. To address this concern, future circuits will continuously monitor their own performance and route information around sections that are unreliable in the same manner that information on the Internet is routed around nonfunctioning nodes. IBM has been particularly active in this area of research and has already developed microprocessor designs that automatically diagnose problems and reconfigure chip resources accordingly.22
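The route-around-failure idea is the same one packet networks use. A minimal Python sketch (the four-node topology and the failed node are invented for illustration): a breadth-first search that simply refuses to enter sections flagged as unreliable by self-diagnosis.

```python
from collections import deque

# Toy circuit/network: nodes with links; one section has been
# flagged as unreliable by the chip's self-diagnosis.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}
failed = {"B"}  # the flagged, nonfunctioning section

def route(src, dst):
    """Breadth-first search that skips failed nodes entirely."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links[node]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable with current failures

print(route("A", "D"))  # information flows around the failed section
```

The direct path through B is avoided; traffic reaches D via C instead, just as Internet routing works around nonfunctioning nodes.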
Emulating Biology. The idea of building electronic or mechanical systems that are self-replicating and self-organizing is inspired by biology, which relies on these properties. Research published in the Proceedings of the National Academy of Sciences described the construction of self-replicating nanowires based on prions, which are self-replicating proteins. (As detailed in chapter 4, one form of prion appears to play a role in human memory, whereas another form is believed to be responsible for variant Creutzfeldt-Jakob disease, the human form of mad-cow disease.)23 The team involved in the project used prions as a model because of their natural strength. Because prions do not normally conduct electricity, however, the scientists created a genetically modified version containing a thin layer of gold, which conducts electricity with low resistance. MIT biology professor Susan Lindquist, who headed the study, commented, “Most of the people working on nanocircuits are trying to build them using ‘top-down’ fabrication techniques. We thought we’d try a ‘bottom-up’ approach, and let molecular self-assembly do the hard work for us.”
The ultimate self-replicating molecule from biology is, of course, DNA. Duke University researchers created molecular building blocks called “tiles” out of self-assembling DNA molecules.24 They were able to control the structure of the resulting assembly, creating “nanogrids.” This technique automatically attaches protein molecules to each nanogrid’s cell, which could be used to perform computing operations. They also demonstrated a chemical process that coated the DNA nanoribbons with silver to create nanowires. Commenting on the article in the September 26, 2003, issue of the journal Science, lead researcher Hao Yan said, “To use DNA self-assembly to template protein molecules or other molecules has been sought for years, and this is the first time it has been demonstrated so clearly.”25
Computing with DNA. DNA is nature’s own nanoengineered computer, and its ability to store information and conduct logical manipulations at the molecular level has already been exploited in specialized “DNA computers.” A DNA computer is essentially a test tube filled with water containing trillions of DNA molecules, with each molecule acting as a computer.
The goal of the computation is to solve a problem, with the solution expressed as a sequence of symbols. (For example, the sequence of symbols could represent a mathematical proof or just the digits of a number.) Here’s how a DNA computer works. A small strand of DNA is created, using a unique code for each symbol. Each such strand is replicated trillions of times using a process called “polymerase chain reaction” (PCR). These pools of DNA are then put into a test tube. Because DNA has an affinity to link strands together, long strands form automatically, with sequences of the strands representing the different symbols, each of them a possible solution to the problem. Since there will be many trillions of such strands, there are multiple strands for each possible answer (that is, each possible sequence of symbols).
The next step of the process is to test all of the strands simultaneously. This is done by using specially designed enzymes that destroy strands that do not meet certain criteria. The enzymes are applied to the test tube sequentially, and by designing a precise series of enzymes the procedure will eventually obliterate all the incorrect strands, leaving only the ones with the correct answer. (For a more complete description of the process, see note 26.)
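The generate-and-filter scheme can be sketched in ordinary code. In this toy Python sketch (the problem, the strand length, and the "enzyme" predicates are all invented for illustration), each predicate plays the role of an enzyme that destroys strands failing one criterion:

```python
import itertools

# Each "strand" encodes one candidate solution; here, a 4-bit sequence.
# PCR-style replication gives the test tube every candidate at once;
# in software we simply enumerate them all.
strands = list(itertools.product([0, 1], repeat=4))

# Toy problem: find sequences that contain exactly two 1s and begin with a 1.
# Each "enzyme" destroys the strands that fail one criterion.
enzymes = [
    lambda s: sum(s) == 2,  # exactly two 1s
    lambda s: s[0] == 1,    # begins with a 1
]

# Apply the enzymes sequentially, as in the test tube.
for enzyme in enzymes:
    strands = [s for s in strands if enzyme(s)]

print(strands)  # only the strands encoding correct answers survive
```

Software checks each candidate one by one; the chemistry's advantage is that all trillions of strands are tested in parallel.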
The key to the power of DNA computing is that it allows for testing each of the trillions of strands simultaneously. In 2003 Israeli scientists led by Ehud Shapiro at the Weizmann Institute of Science combined DNA with adenosine triphosphate (ATP), the natural fuel for biological systems such as the human body.27 With this method, each of the DNA molecules was able to perform computations as well as provide its own energy. The Weizmann scientists demonstrated a configuration consisting of two spoonfuls of this liquid supercomputing system, which contained thirty million billion molecular computers and performed a total of 660 trillion calculations per second (6.6 × 10^14 cps). The energy consumption of these computers is extremely low, only fifty millionths of a watt for all thirty million billion computers.
There’s a limitation, however, to DNA computing: each of the many trillions of computers has to perform the same operation at the same time (although on different data), so that the device is a “single instruction multiple data” (SIMD) architecture. While there are important classes of problems that are amenable to a SIMD system (for example, processing every pixel in an image for image enhancement or compression, and solving combinatorial-logic problems), it is not possible to program them for general-purpose algorithms, in which each computer is able to execute whatever operation is needed for its particular mission. (Note that the research projects at Purdue University and Duke University, described earlier, that use self-assembling DNA strands to create three-dimensional structures are different from the DNA computing described here. Those research projects have the potential to create arbitrary configurations that are not limited to SIMD computing.)
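The SIMD/MIMD distinction can be made concrete in a few lines of Python (the pixel values and per-processor tasks are arbitrary illustrations):

```python
# SIMD: one instruction stream, applied to many data elements at once.
# Example: brighten every pixel of an image by the same amount.
pixels = [12, 200, 131, 255, 60]
simd_result = [min(p + 50, 255) for p in pixels]  # same operation everywhere

# MIMD: each processor runs whatever instruction its own task requires.
tasks = [
    (lambda x: x + 1, 10),  # processor 1 increments
    (lambda x: x * 2, 10),  # processor 2 doubles
    (lambda x: x - 3, 10),  # processor 3 subtracts
]
mimd_result = [op(x) for op, x in tasks]  # a different operation per element

print(simd_result)
print(mimd_result)
```

A DNA or optical computer is confined to the first pattern; learning and reasoning, on this account, need the second.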
Computing with Spin. In addition to their negative electrical charge, electrons have another property that can be exploited for memory and computation: spin. According to quantum mechanics, electrons spin on an axis, similar to the way the Earth rotates on its axis. This concept is theoretical, because an electron is considered to occupy a point in space, so it is difficult to imagine a point with no size that nonetheless spins. However, when an electrical charge moves, it causes a magnetic field, which is real and measurable. An electron can spin in one of two directions, described as “up” and “down,” so this property can be exploited for logic switching or to encode a bit of memory.
The exciting property of spintronics is that no energy is required to change an electron’s spin state. Stanford University physics professor Shoucheng Zhang and University of Tokyo professor Naoto Nagaosa put it this way: “We have discovered the equivalent of a new ‘Ohm’s Law’ [the electronics law that states that current in a wire equals voltage divided by resistance]. . . . [It] says that the spin of the electron can be transported without any loss of energy, or dissipation. Furthermore, this effect occurs at room temperature in materials already widely used in the semiconductor industry, such as gallium arsenide. That’s important because it could enable a new generation of computing devices.”28
The potential, then, is to achieve the efficiencies of superconducting (that is, moving information at or close to the speed of light without any loss of information) at room temperature. It also allows multiple properties of each electron to be used for computing, thereby increasing the potential for memory and computational density.
One form of spintronics is already familiar to computer users: magneto-resistance (a change in electrical resistance caused by a magnetic field) is used to store data on magnetic hard drives. An exciting new form of nonvolatile memory based on spintronics called MRAM (magnetic random-access memory) is expected to enter the market within a few years. Like hard drives, MRAM memory retains its data without power but uses no moving parts and will have speeds and rewritability comparable to conventional RAM.
MRAM stores information in ferromagnetic metallic alloys, which are suitable for data storage but not for the logical operations of a microprocessor. The holy grail of spintronics would be to achieve practical spintronics effects in a semiconductor, which would enable us to use the technology both for memory and for logic. Today’s chip manufacturing is based on silicon, which does not have the requisite magnetic properties. In March 2004 an international group of scientists reported that by doping a blend of silicon and iron with cobalt, the new material was able to display the magnetic properties needed for spintronics while still maintaining the crystalline structure silicon requires as a semiconductor.29
An important role for spintronics in the future of computer memory is clear, and it is likely to contribute to logic systems as well. The spin of an electron is a quantum property (subject to the laws of quantum mechanics), so perhaps the most important application of spintronics will be in quantum computing systems, using the spin of quantum-entangled electrons to represent qubits, which I discuss below.
Spin has also been used to store information in the nucleus of atoms, using the complex interaction of their protons’ magnetic moments. Scientists at the University of Oklahoma also demonstrated a “molecular photography” technique for storing 1,024 bits of information in a single liquid-crystal molecule comprising nineteen hydrogen atoms.30
Computing with Light. Another approach to SIMD computing is to use multiple beams of laser light in which information is encoded in each stream of photons. Optical components can then be used to perform logical and arithmetic functions on the encoded information streams. For example, a system developed by Lenslet, a small Israeli company, uses 256 lasers and can perform eight trillion calculations per second by performing the same calculation on each of the 256 streams of data.31 The system can be used for applications such as performing data compression on 256 video channels.
SIMD technologies such as DNA computers and optical computers will have important specialized roles to play in the future of computation. The replication of certain aspects of the functionality of the human brain, such as processing sensory data, can use SIMD architectures. For other brain regions, such as those dealing with learning and reasoning, general-purpose computing with its “multiple instruction multiple data” (MIMD) architectures will be required. For high-performance MIMD computing, we will need to apply the three-dimensional molecular-computing paradigms described above.
Quantum Computing. Quantum computing is an even more radical form of SIMD parallel processing, but one that is in a much earlier stage of development compared to the other new technologies we have discussed. A quantum computer contains a series of qubits, which essentially are zero and one at the same time. The qubit is based on the fundamental ambiguity inherent in quantum mechanics. In a quantum computer, the qubits are represented by a quantum property of particles—for example, the spin state of individual electrons. When the qubits are in an “entangled” state, each one is simultaneously in both states. In a process called “quantum decoherence” the ambiguity of each qubit is resolved, leaving an unambiguous sequence of ones and zeroes. If the quantum computer is set up in the right way, that decohered sequence will represent the solution to a problem. Essentially, only the correct sequence survives the process of decoherence.
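The bookkeeping behind this description can be sketched classically, though only at exponential cost, which is exactly why quantum hardware matters. In this illustrative Python sketch (the register size and the chosen answer are invented), a three-qubit register is a list of 2^3 amplitudes; we imitate the end state of a hypothetical algorithm that has concentrated all amplitude on one answer, then "measure":

```python
import random

n = 3       # qubits in the register
N = 2 ** n  # a quantum register carries amplitudes for all 2^n basis states

# Superposition: every bit pattern present at once, with equal amplitude.
amplitudes = [1 / N ** 0.5] * N

# A quantum algorithm reshapes the amplitudes so that decoherence leaves
# the correct answer. Here we imitate that end state directly: all
# amplitude on the (arbitrarily chosen) answer |101>.
answer = 0b101
amplitudes = [1.0 if i == answer else 0.0 for i in range(N)]

# Measurement/decoherence: sample one basis state with
# probability equal to the squared amplitude.
probabilities = [a * a for a in amplitudes]
outcome = random.choices(range(N), weights=probabilities)[0]
print(format(outcome, f"0{n}b"))  # the surviving unambiguous bit sequence
```

Note the cost of the classical imitation: the amplitude list doubles with every added qubit, whereas the quantum computer holds all 2^n states in n physical qubits.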