The Age of Spiritual Machines: When Computers Exceed Human Intelligence
The advantage of an optical computer is that it is massively parallel with potentially trillions of simultaneous calculations. Its disadvantage is that it is not programmable and performs a fixed set of calculations for a given configuration of optical computing elements. But for important classes of problems such as recognizing patterns, it combines massive parallelism (a quality shared by the human brain) with extremely high speed (which the human brain lacks).
Computing with the Machinery of Life
A new field called molecular computing has sprung up to harness the DNA molecule itself as a practical computing device. DNA is nature’s own nanoengineered computer and it is well suited for solving combinatorial problems. Combining attributes is, after all, the essence of genetics. Applying actual DNA to practical computing applications got its start when Leonard Adleman, a University of Southern California mathematician, coaxed a test tube full of DNA molecules (see the box on page 108) to solve the well-known “traveling salesperson” problem. In this classic problem, we try to find an optimal route for a hypothetical traveler between multiple cities without having to visit a city more than once. Only certain city pairs are connected by routes, so finding the right path is not straightforward. It is an ideal problem for a recursive algorithm, although if the number of cities is too large, even a very fast recursive search will take far too long.
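As a rough illustration of the recursive approach, here is a minimal sketch in Python; the city names and which pairs are connected are invented for illustration, and the code simply backtracks through every route that starts and ends at given cities and visits each city exactly once.

```python
# Minimal backtracking (recursive) search for a route that visits every
# city exactly once, using only the city pairs that are connected.
# The map below is a small hypothetical example.
CONNECTIONS = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

def find_route(current, end, visited):
    """Return a route from `current` to `end` visiting every city once, or None."""
    if current == end and len(visited) == len(CONNECTIONS):
        return visited                       # every city visited exactly once
    for next_city in CONNECTIONS[current]:
        if next_city not in visited:
            route = find_route(next_city, end, visited + [next_city])
            if route is not None:
                return route
    return None                              # dead end: backtrack

print(find_route("A", "D", ["A"]))           # e.g. ['A', 'C', 'B', 'D']
```

With a handful of cities this returns instantly, but the number of candidate routes grows roughly factorially with the number of cities, which is exactly the blowup that makes massively parallel approaches attractive.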
Professor Adleman and other scientists in the molecular-computing field have identified a set of enzyme reactions that corresponds to the logical and arithmetic operations needed to solve a variety of computing problems. Although DNA molecular operations produce occasional errors, the number of DNA strands being used is so large that any molecular errors become statistically insignificant. Thus, despite the inherent error rate in DNA’s computing and copying processes, a DNA computer can be highly reliable if properly designed.
DNA computers have subsequently been applied to a range of difficult combinatorial problems. A DNA computer is more flexible than an optical computer, but it is still limited to the technique of massively parallel search by assembling combinations of elements.10
There is another, more powerful way to apply the computing power of DNA that has not yet been explored. I present it below in the section on quantum computing.
HOW TO SOLVE THE TRAVELING-SALESPERSON PROBLEM USING A TEST TUBE OF DNA
One of DNA’s advantageous properties is its ability to replicate itself, along with the information it contains. To solve the traveling-salesperson problem, Professor Adleman performed the following steps:
• Generate a small strand of DNA with a unique code for each city.
• Replicate each such strand (one for each city) trillions of times using a process called “polymerase chain reaction” (PCR).
• Next, put the pools of DNA (one for each city) together in a test tube. This step uses DNA’s affinity to link strands together. Longer strands will form automatically. Each such longer strand represents a possible route of multiple cities. The small strands representing each city link up with one another in a random fashion, so there is no mathematical certainty that a linked strand representing the correct answer (sequence of cities) will be formed. However, the number of strands is so vast that it is virtually certain that at least one strand (and probably millions) will be formed that represents the correct answer.
The next steps use specially designed enzymes to eliminate the trillions of strands that represent the wrong answer, leaving only the strands representing the correct answer:
• Use molecules called primers to destroy those DNA strands that do not start with the start city as well as those that do not end with the end city, and replicate these surviving strands (using PCR).
• Use an enzyme reaction to eliminate those DNA strands that represent a travel path longer than the total number of cities.
• Use an enzyme reaction to destroy those strands that do not include the first city. Repeat for each of the cities.
• Now, each of the surviving strands represents the correct answer. Replicate these surviving strands (using PCR) until there are enough of them to read out in the next step.
• Using a technique called electrophoresis, read out the DNA sequence of these correct strands (as a group). The readout looks like a set of distinct lines, which specifies the correct sequence of cities.
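Purely as a software analogy of the box above (not a model of the chemistry), the sketch below generates a large number of random city sequences and then filters away everything that fails the tests; the city names, the set of connected pairs, and the sample count are all hypothetical.

```python
import random

# Software analogy of the DNA procedure: assemble huge numbers of random
# candidate routes, then filter away every candidate that fails a test.
CITIES = ["A", "B", "C", "D"]
CONNECTED = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")}

def connected(x, y):
    return (x, y) in CONNECTED or (y, x) in CONNECTED

# Random assembly (the analogue of strands linking up at random).
candidates = [[random.choice(CITIES) for _ in range(len(CITIES))]
              for _ in range(100_000)]

# Filtering (the analogue of the enzyme steps): wrong start or end city,
# a missing city, or an unconnected hop eliminates a candidate.
survivors = [route for route in candidates
             if route[0] == "A" and route[-1] == "D"
             and set(route) == set(CITIES)
             and all(connected(a, b) for a, b in zip(route, route[1:]))]

# "Read out" the surviving answers, as electrophoresis reads the strands.
print({tuple(route) for route in survivors})
```

The DNA version does the same generate-and-filter work chemically, with trillions of strands rather than a hundred thousand samples.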
The Brain in the Crystal
Another approach contemplates growing a computer as a crystal directly in three dimensions, with computing elements being the size of large molecules within the crystalline lattice. This is another approach to harnessing the third dimension.
Stanford Professor Lambertus Hesselink has described a system in which data is stored in a crystal as a hologram—an optical interference pattern.11 This three-dimensional storage method requires only a million atoms for each bit and thus could achieve a trillion bits of storage for each cubic centimeter. Other projects hope to harness the regular molecular structure of crystals as actual computing elements.
The Nanotube: A Variation of Buckyballs
Three professors—Richard Smalley and Robert Curl of Rice University, and Harold Kroto of the University of Sussex—shared the 1996 Nobel Prize in Chemistry for their 1985 discovery of soccer-ball-shaped molecules formed of a large number of carbon atoms. Organized in hexagonal and pentagonal patterns like R. Buckminster Fuller’s building designs, they were dubbed “buckyballs.” These unusual molecules, which form naturally in the hot fumes of a furnace, are extremely strong—a hundred times stronger than steel—a property they share with Fuller’s architectural innovations.12
More recently, Dr. Sumio Iijima of Nippon Electric Company showed that in addition to the spherical buckyballs, the vapor from carbon arc lamps also contained elongated carbon molecules that looked like long tubes.13 Called nanotubes because of their extremely small size—fifty thousand of them side by side would equal the thickness of one human hair—they are formed of the same hexagonal patterns of carbon atoms as buckyballs and share the buckyball’s unusual strength.
What is most remarkable about the nanotube is that it can perform the electronic functions of silicon-based components. If a nanotube is straight, it conducts electricity as well as or better than a metal conductor. If a slight helical twist is introduced, the nanotube begins to act like a transistor. The full range of electronic devices can be built using nanotubes.
Since a nanotube is essentially a sheet of graphite only one atom thick, it is vastly smaller than the silicon transistors on an integrated chip. Although extremely small, nanotubes are far more durable than silicon devices. Moreover, they handle heat much better than silicon and thus can be assembled into three-dimensional arrays more easily than silicon transistors. Dr. Alex Zettl, a physics professor at the University of California at Berkeley, envisions three-dimensional arrays of nanotube-based computing elements similar to—but far denser and faster than—the human brain.
QUANTUM COMPUTING: THE UNIVERSE IN A CUP
Quantum particles are the dreams that stuff is made of.
—David Moser
So far we have been talking about mere digital computing. There is actually a more powerful approach called quantum computing. It promises the ability to solve problems that even massively parallel digital computers cannot solve. Quantum computers harness a paradoxical result of quantum mechanics. Actually, I am being redundant—all results of quantum mechanics are paradoxical.
Note that the Law of Accelerating Returns and other projections in this book do not rely on quantum computing. The projections in this book are based on readily measurable trends and do not depend on discontinuities in technological progress, even though such discontinuities did occur in the twentieth century. There will inevitably be technological discontinuities in the twenty-first century as well, and quantum computing would certainly qualify.
What is quantum computing? Digital computing is based on “bits” of information which are either off or on—zero or one. Bits are organized into larger structures such as numbers, letters, and words, which in turn can represent virtually any form of information: text, sounds, pictures, moving images. Quantum computing, on the other hand, is based on qu-bits (pronounced cue-bits), which essentially are zero and one at the same time. The qu-bit is based on the fundamental ambiguity inherent in quantum mechanics. The position, momentum, or other state of a fundamental particle remains “ambiguous” until a process of disambiguation causes that particle to “decide” where it is, where it has been, and what properties it has. For example, consider a stream of photons that strike a sheet of glass at a 45-degree angle. As each photon strikes the glass, it has a choice of traveling either straight through the glass or reflecting off the glass. Each photon will actually take both paths (actually more than this, see below) until a process of conscious observation forces each particle to decide which path it took. This behavior has been extensively confirmed in numerous contemporary experiments.
In a quantum computer, the qu-bits would be represented by a property—nuclear spin is a popular choice—of individual atomic nuclei. If set up in the proper way, the nuclei will not have decided the direction of their spin (up or down) and thus will be in both states at the same time. The process of conscious observation of the spin states—or any subsequent phenomena dependent on a determination of these states—causes the ambiguity to be resolved. This process of disambiguation is called quantum decoherence. If it weren’t for quantum decoherence, the world we live in would be a baffling place indeed.
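The “both at once” character of a qu-bit can be sketched numerically. The toy Python below (not a real quantum simulator) represents a single qu-bit as two amplitudes; “observation” picks 0 or 1 with probabilities given by the squared amplitudes, after which the ambiguity is gone.

```python
import random

# A single qu-bit as a pair of amplitudes (amp0, amp1).  An equal
# superposition is "zero and one at the same time": both amplitudes nonzero.
amp0, amp1 = 2 ** -0.5, 2 ** -0.5            # each outcome has probability 0.5

def observe(amp0, amp1):
    """Observation resolves the ambiguity: 0 or 1, weighted by squared amplitudes."""
    return 0 if random.random() < amp0 ** 2 else 1

outcome = observe(amp0, amp1)
# After observation the state is definite: (1, 0) if 0 was seen, else (0, 1).
amp0, amp1 = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
print(outcome, (amp0, amp1))
```

Describing n linked qu-bits in this way requires 2^n amplitudes, which is the source of the exponential capacity discussed next.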
The key to the quantum computer is that we would present it with a problem, along with a way to test the answer. We would set up the quantum decoherence of the qu-bits in such a way that only an answer that passes the test survives the decoherence. The failing answers essentially cancel each other out. As with a number of other approaches (for example, recursive and genetic algorithms), one of the keys to quantum computing is, therefore, a careful statement of the problem, including a precise way to test possible answers.
The series of qu-bits represents simultaneously every possible solution to the problem. A single qu-bit represents two possible solutions. Two linked qu-bits represent four possible answers. A quantum computer with 1,000 qu-bits represents 2^1,000 (this is approximately equal to a decimal number consisting of 1, followed by 301 zeroes) possible solutions simultaneously. The statement of the problem—expressed as a test to be applied to potential answers—is presented to the string of qu-bits so that the qu-bits decohere (that is, each qu-bit changes from its ambiguous 0-1 state to an actual 0 or a 1), leaving a series of 0’s and 1’s that pass the test. Essentially all 2^1,000 possible solutions have been tried simultaneously, leaving only the correct solution.
This process of reading out the answer through quantum decoherence is obviously the key to quantum computing. It is also the most difficult aspect to grasp. Consider the following analogy. Beginning physics students learn that if light strikes a mirror at an angle, it will bounce off the mirror in the opposite direction and at the same angle to the surface. But according to quantum theory, that is not what is happening. Each photon actually bounces off every possible point on the mirror, essentially trying out every possible path. The vast majority of these paths cancel each other out, leaving only the path that classical physics predicts. Think of the mirror as representing a problem to be solved. Only the correct solution—light bounced off at an angle equal to the incoming angle—survives all of the quantum cancellations. A quantum computer works the same way. The test of the correctness of the answer to the problem is set up in such a way that the vast majority of the possible answers—those that do not pass the test—cancel each other out, leaving only the sequence of bits that does pass the test. An ordinary mirror, therefore, can be thought of as a special example of a quantum computer, albeit one that solves a rather simple problem.
As a more useful example, encryption codes are based on factoring large numbers (factoring means determining which smaller numbers, when multiplied together, result in the larger number). Factoring a number with several hundred bits is virtually impossible on any digital computer even if we had billions of years to wait for the answer. A quantum computer can try every possible combination of factors simultaneously and break the code in less than a billionth of a second (communicating the answer to human observers does take a bit longer). The test applied by the quantum computer during its key disambiguation stage is very simple: just multiply one factor by the other and if the result equals the encryption code, then we have solved the problem.
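Here is that test written out as a sketch in ordinary Python, which of course checks candidates one at a time rather than all at once; the number used is a small invented stand-in for a real encryption key.

```python
def passes_test(p, q, target):
    """The test described above: do the candidate factors multiply to the code?"""
    return p * q == target

target = 100_160_063            # toy stand-in for an encryption key (10007 * 10009)

# A digital computer must try candidate factors serially; the quantum
# computer described above would, in effect, apply this same test to
# every candidate simultaneously during decoherence.
for p in range(2, int(target ** 0.5) + 1):
    if passes_test(p, target // p, target):
        print(p, target // p)   # -> 10007 10009
        break
```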
It has been said that quantum computing is to digital computing as a hydrogen bomb is to a firecracker. This is a remarkable statement when we consider that digital computing is quite revolutionary in its own right. The analogy is based on the following observation. Consider (at least in theory) a Universe-sized (nonquantum) computer in which every neutron, electron, and proton in the Universe is turned into a computer, and each one (that is, every particle in the Universe) is able to compute trillions of calculations per second. Now imagine certain problems that this Universe-sized supercomputer would be unable to solve even if we ran it until either the next big bang or until all the stars in the Universe died—about ten to thirty billion years. There are many examples of such massively intractable problems: for example, cracking encryption codes that use a thousand bits, or solving the traveling-salesperson problem with a thousand cities. While even very massive digital computing (including our theoretical Universe-sized computer) is unable to solve this class of problems, a quantum computer of microscopic size could solve such problems in less than a billionth of a second.
Are quantum computers feasible? Recent advances, both theoretical and practical, suggest that the answer is yes. Although a practical quantum computer has not been built, the means for harnessing the requisite decoherence has been demonstrated. Isaac Chuang of Los Alamos National Laboratory and MIT’s Neil Gershenfeld have actually built a quantum computer using the carbon atoms in the alanine molecule. Their quantum computer was only able to add one and one, but that’s a start. We have, of course, been relying on practical applications of other quantum effects, such as the electron tunneling in transistors, for decades.14
A Quantum Computer in a Cup of Coffee
One of the difficulties in designing a practical quantum computer is that it needs to be extremely small, basically atom or molecule sized, to harness the delicate quantum effects. But it is very difficult to keep individual atoms and molecules from moving around due to thermal effects. Moreover, individual molecules are generally too unstable to build a reliable machine. To address these problems, Chuang and Gershenfeld have come up with a theoretical breakthrough. Their solution is to take a cup of liquid and consider every molecule to be a quantum computer. Now instead of a single unstable molecule-sized quantum computer, they have a cup with about a hundred billion trillion quantum computers. The point here is not more massive parallelism, but rather massive redundancy. In this way, the inevitably erratic behavior of some of the molecules has no effect on the statistical behavior of all the molecules in the liquid. This approach of using the statistical behavior of trillions of molecules to overcome the lack of reliability of a single molecule is similar to Professor Adleman’s use of trillions of DNA strands to overcome the comparable issue in DNA computing.
This approach to quantum computing also solves the problem of reading out the answer bit by bit without causing those qu-bits that have not yet been read to decohere prematurely. Chuang and Gershenfeld subject their liquid computer to radio-wave pulses, which cause the molecules to respond with signals indicating the spin state of each nucleus. Each pulse does cause some unwanted decoherence, but, again, this decoherence does not affect the statistical behavior of trillions of molecules. In this way, the quantum effects become stable and reliable.
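The statistical argument can be illustrated with a toy simulation (plain Python, invented error rate): even if an individual “molecule” occasionally gives an erratic reading, the average over a large ensemble pins down the underlying spin state.

```python
import random

# Toy illustration of redundancy: each simulated "molecule" reports the true
# spin (+1 or -1) but misbehaves some fraction of the time.
TRUE_SPIN = +1
ERROR_RATE = 0.05                        # invented figure for illustration

def one_reading():
    if random.random() < ERROR_RATE:
        return random.choice([+1, -1])   # erratic molecule: random answer
    return TRUE_SPIN

for n_molecules in (1, 100, 1_000_000):
    average = sum(one_reading() for _ in range(n_molecules)) / n_molecules
    print(n_molecules, round(average, 3))

# A single reading can be wrong, but the ensemble average stays close to +1,
# so the signal from the liquid is stable even though individual molecules are not.
```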
Chuang and Gershenfeld are currently building a quantum computer that can factor small numbers. Although this early model will not compete with conventional digital computers, it will be an important demonstration of the feasibility of quantum computing. Apparently high on their list for a suitable quantum liquid is freshly brewed Java coffee, which, Gershenfeld notes, has “unusually even heating characteristics.”
Quantum Computing with the Code of Life
Quantum computing starts to overtake digital computing when we can link at least 40 qu-bits. A 40-qu-bit quantum computer would be evaluating a trillion possible solutions simultaneously, which would match the fastest supercomputers. At 60 qu-bits, we would be doing a million trillion simultaneous trials. When we get to hundreds of qu-bits, the capabilities of a quantum computer would vastly overpower those of any conceivable digital computer.
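The counts quoted here follow directly from powers of two, as a quick check shows:

```python
# Each added qu-bit doubles the number of candidate solutions represented.
print(2 ** 40)                # 1,099,511,627,776 -> roughly a trillion
print(2 ** 60)                # 1,152,921,504,606,846,976 -> roughly a million trillion
print(len(str(2 ** 1_000)))   # 302 digits, i.e. about 1 followed by 301 zeroes
```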
So here’s my idea. The power of a quantum computer depends on the number of qu-bits that we can link together. We need to find a large molecule that is specifically designed to hold large amounts of information. Evolution has designed just such a molecule: DNA. We can readily create a DNA molecule of any size we wish, from a few dozen nucleotide rungs to thousands. So once again we combine two elegant ideas—in this case the liquid-DNA computer and the liquid-quantum computer—to come up with a solution greater than the sum of its parts. By putting trillions of DNA molecules in a cup, there is the potential to build a highly redundant—and therefore reliable—quantum computer with as many qu-bits as we care to harness. Remember you read it here first.