Doubling (or Halving) Times 33
Moore’s Law: Self-Fulfilling Prophecy?
Some observers have stated that Moore’s Law is nothing more than a self-fulfilling prophecy: that industry participants anticipate where they need to be at particular times in the future, and organize their research and development accordingly. The industry’s own written road map is a good example of this.34 However, the exponential trends in information technology are far broader than those covered by Moore’s Law. We see the same types of trends in essentially every technology or measurement that deals with information. This includes many technologies in which a perception of accelerating price-performance does not exist or has not previously been articulated (see below). Even within computing itself, the growth in capability per unit cost is much broader than what Moore’s Law alone would predict.
The Fifth Paradigm 35
Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century (see the figure below).
The five paradigms of exponential growth of computing: Each time one paradigm has run out of steam, another has picked up the pace.
As the figure demonstrates, there were actually four different paradigms—electromechanical, relays, vacuum tubes, and discrete transistors—that showed exponential growth in the price-performance of computing long before integrated circuits were even invented. And Moore’s paradigm won’t be the last. When Moore’s Law reaches the end of its S-curve, now expected before 2020, the exponential growth will continue with three-dimensional molecular computing, which will constitute the sixth paradigm.
Fractal Dimensions and the Brain
Note that the use of the third dimension in computing systems is not an either-or choice but a continuum between two and three dimensions. In terms of biological intelligence, the human cortex is actually rather flat, with only six thin layers that are elaborately folded, an architecture that greatly increases the surface area. This folding is one way to use the third dimension. In “fractal” systems (systems in which a drawing replacement or folding rule is iteratively applied), structures that are elaborately folded are considered to constitute a partial dimension. From that perspective, the convoluted surface of the human cortex represents a number of dimensions in between two and three. Other brain structures, such as the cerebellum, are three-dimensional but comprise a repeating structure that is essentially two-dimensional. It is likely that our future computational systems will also combine systems that are highly folded two-dimensional systems with fully three-dimensional structures.
Notice that the figure shows an exponential curve on a logarithmic scale, indicating two levels of exponential growth.36 In other words, there is a gentle but unmistakable exponential growth in the rate of exponential growth. (A straight line on a logarithmic scale shows simple exponential growth; an upwardly curving line shows higher-than-simple exponential growth.) As you can see, it took three years to double the price-performance of computing at the beginning of the twentieth century and two years in the middle, and it takes about one year currently.37
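A minimal numeric sketch of why a shrinking doubling time bends the curve upward on a logarithmic plot (the three-, two-, and one-year doubling times are the rough figures from the text; the thirty-year window is an arbitrary illustrative choice):

```python
import math

# Illustrative sketch: price-performance grows by a factor of 2 every
# "doubling_time" years; per the text, that doubling time shrank over the
# century from about three years to about one year.
def growth_factor(years, doubling_time):
    return 2 ** (years / doubling_time)

for dt in (3.0, 2.0, 1.0):
    factor = growth_factor(30, dt)
    print(f"doubling every {dt:.0f} yr: x{factor:,.0f} over 30 years "
          f"(log10 = {math.log10(factor):.1f})")

# A constant doubling time gives a straight line on a log plot; a shrinking
# doubling time makes the logarithm itself curve upward, that is, exponential
# growth in the rate of exponential growth.
```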
Hans Moravec provides the following similar chart (see the figure below), which uses a different but overlapping set of historical computers and plots trend lines (slopes) at different points in time. As with the figure above, the slope increases with time, reflecting the second level of exponential growth.38
If we project these computational performance trends through this next century, we can see in the figure below that supercomputers will match human brain capability by the end of this decade and personal computing will achieve it by around 2020—or possibly sooner, depending on how conservative an estimate of human brain capacity we use. (We’ll discuss estimates of human brain computational speed in the next chapter.)39
The exponential growth of computing is a marvelous quantitative example of the exponentially growing returns from an evolutionary process. We can express the exponential growth of computing in terms of its accelerating pace: it took ninety years to achieve the first MIPS per thousand dollars; now we add one MIPS per thousand dollars every five hours.40
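As a rough consistency check (a sketch only, assuming the approximately one-year doubling time discussed above and treating the five-hour figure as exact), the rate at which new MIPS per thousand dollars are added implies a current level of a few thousand MIPS per thousand dollars:

```python
import math

# Sketch: if price-performance N (MIPS per $1,000) doubles every T years,
# its instantaneous growth rate is dN/dt = N * ln(2) / T. Adding one MIPS
# per $1,000 every 5 hours therefore implies a present level N of roughly:
T_years = 1.0                      # assumed doubling time (about one year)
hours_per_year = 365.25 * 24
added_per_hour = 1.0 / 5.0         # MIPS per $1,000 added per hour
N = added_per_hour * hours_per_year * T_years / math.log(2)
print(f"implied current level: about {N:,.0f} MIPS per $1,000")
```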
IBM’s Blue Gene/P supercomputer is planned to have one million gigaflops (billions of floating-point operations per second), or 10¹⁵ calculations per second, when it launches in 2007.41 That’s one tenth of the 10¹⁶ calculations per second needed to emulate the human brain (see the next chapter). And if we extrapolate this exponential curve, we get 10¹⁶ calculations per second early in the next decade.
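A quick extrapolation (a sketch, again assuming roughly one doubling per year) shows why the next order of magnitude arrives early in the next decade:

```python
import math

# Sketch: going from 10**15 to 10**16 calculations per second is one factor
# of 10, i.e. log2(10), or about 3.3 doublings. At roughly one doubling per
# year, that is a bit over three years beyond the 2007 launch cited above.
doublings_needed = math.log2(10)
years_per_doubling = 1.0           # assumed
print(f"{doublings_needed:.1f} doublings, about "
      f"{doublings_needed * years_per_doubling:.1f} years after 2007")
```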
As discussed above, Moore’s Law narrowly refers to the number of transistors on an integrated circuit of fixed size and sometimes has been expressed even more narrowly in terms of transistor feature size. But the most appropriate measure to track price-performance is computational speed per unit cost, an index that takes into account many levels of “cleverness” (innovation, which is to say, technological evolution). In addition to all of the invention involved in integrated circuits, there are multiple layers of improvement in computer design (for example, pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others).
The human brain uses a very inefficient electrochemical, digital-controlled analog computational process. The bulk of its calculations are carried out in the interneuronal connections at a speed of only about two hundred calculations per second (in each connection), which is at least one million times slower than contemporary electronic circuits. But the brain gains its prodigious powers from its extremely parallel organization in three dimensions. There are many technologies in the wings that will build circuitry in three dimensions, which I discuss in the next chapter.
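A back-of-the-envelope version of that parallelism (a sketch using round figures of the kind examined in the next chapter, on the order of 10¹¹ neurons with about 10³ connections each; these are assumptions for illustration, not the book's final estimate):

```python
# Sketch with rough, round figures: ~10**11 neurons, ~10**3 connections per
# neuron, ~200 calculations per second per connection (all assumed here).
neurons = 1e11
connections_per_neuron = 1e3
calcs_per_connection_per_sec = 200

total_cps = neurons * connections_per_neuron * calcs_per_connection_per_sec
print(f"estimated brain capacity: ~{total_cps:.0e} calculations per second")
# Roughly 2e16: even though each connection is about a million times slower
# than electronic circuits, massive parallelism yields an enormous aggregate.
```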
We might ask whether there are inherent limits to the capacity of matter and energy to support computational processes. This is an important issue, but as we will see in the next chapter, we won’t approach those limits until late in this century. It is important to distinguish between the S-curve that is characteristic of any specific technological paradigm and the continuing exponential growth that is characteristic of the ongoing evolutionary process within a broad area of technology, such as computation. Specific paradigms, such as Moore’s Law, do ultimately reach levels at which exponential growth is no longer feasible. But the growth of computation supersedes any of its underlying paradigms and is for present purposes an ongoing exponential.
In accordance with the law of accelerating returns, paradigm shift (also called innovation) turns the S-curve of any specific paradigm into a continuing exponential. A new paradigm, such as three-dimensional circuits, takes over when the old paradigm approaches its natural limit, which has already happened at least four times in the history of computation. In such nonhuman species as apes, the mastery of a toolmaking or -using skill by each animal is characterized by an S-shaped learning curve that ends abruptly; human-created technology, in contrast, has followed an exponential pattern of growth and acceleration since its inception.
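A minimal simulation of that pattern (an illustrative sketch with made-up parameters, not data from the charts): each paradigm follows a logistic S-curve toward its own ceiling, successive paradigms arrive with exponentially higher ceilings, and the envelope of their sum keeps growing roughly exponentially even though every individual paradigm saturates.

```python
import math

# Sketch: capability under each paradigm saturates at its own ceiling, but
# each new paradigm's ceiling is ten times higher and arrives ten time units
# later. The parameters below are purely illustrative.
def logistic(t, ceiling, midpoint, steepness=1.0):
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

paradigms = [(10.0 ** k, 10 * k) for k in range(1, 6)]   # (ceiling, midpoint)

for t in range(0, 60, 10):
    total = sum(logistic(t, c, m) for c, m in paradigms)
    print(f"t={t:2d}  capability ~ {total:10.1f}  (log10 = {math.log10(total):.2f})")
# The log10 column rises by about one unit per ten time steps: a continuing
# exponential stitched together from individually saturating S-curves.
```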
DNA Sequencing, Memory, Communications, the Internet, and Miniaturization
Civilization advances by extending the number of important operations which we can perform without thinking about them.
—ALFRED NORTH WHITEHEAD, 191142
Things are more like they are now than they ever were before.
—DWIGHT D. EISENHOWER
The law of accelerating returns applies to all of technology, indeed to any evolutionary process. It can be charted with remarkable precision in information-based technologies because we have well-defined indexes (for example, calculations per second per dollar, or calculations per second per gram) to measure them. There are a great many examples of the exponential growth implied by the law of accelerating returns, in areas as varied as electronics of all kinds, DNA sequencing, communications, brain scanning, brain reverse engineering, the size and scope of human knowledge, and the rapidly shrinking size of technology. The latter trend is directly related to the emergence of nanotechnology.
The future GNR (Genetics, Nanotechnology, Robotics) age (see chapter 5) will come about not from the exponential explosion of computation alone but rather from the interplay and myriad synergies that will result from multiple intertwined technological advances. As every point on the exponential-growth curves underlying this panoply of technologies represents an intense human drama of innovation and competition, we must consider it remarkable that these chaotic processes result in such smooth and predictable exponential trends. This is not a coincidence but is an inherent feature of evolutionary processes.
When the human-genome scan got under way in 1990, critics pointed out that, given the speed with which the genome could then be scanned, it would take thousands of years to finish the project. Yet the fifteen-year project was completed slightly ahead of schedule, with a first draft in 2003.43 The cost of DNA sequencing came down from about ten dollars per base pair in 1990 to a couple of pennies in 2004 and continues to fall rapidly (see the figure below).44
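As a rough consistency check (a sketch using the approximate endpoints in the text, about ten dollars per base pair in 1990 and about two cents in 2004):

```python
import math

# Sketch: implied halving time of DNA sequencing cost between the two
# approximate endpoints cited in the text.
cost_1990, cost_2004, years = 10.0, 0.02, 14
halvings = math.log2(cost_1990 / cost_2004)        # about 9 halvings
print(f"~{halvings:.1f} halvings over {years} years: "
      f"cost halves roughly every {years / halvings:.1f} years")
```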
There has been smooth exponential growth in the amount of DNA-sequence data that has been collected (see the figure below).45 A dramatic recent example of this improving capacity was the sequencing of the SARS virus, which took only thirty-one days from the identification of the virus, compared to more than fifteen years for HIV.46
Of course, we expect to see exponential growth in electronic memories such as RAM. But note how the trend on this logarithmic graph (below) proceeds smoothly through different technology paradigms: vacuum tube to discrete transistor to integrated circuit.47
Exponential growth in RAM capacity across paradigm shifts.
However, growth in the price-performance of magnetic (disk-drive) memory is not a result of Moore’s Law. This exponential trend reflects the squeezing of data onto a magnetic substrate, rather than transistors onto an integrated circuit, a completely different technical challenge pursued by different engineers and different companies.48
Exponential growth in communications technology (measures for communicating information; see the figure below) has for many years been even more explosive than in processing or memory measures of computation and is no less significant in its implications. Again, this progression involves far more than just shrinking transistors on an integrated circuit but includes accelerating advances in fiber optics, optical switching, electromagnetic technologies, and other factors.49
We are currently moving away from the tangle of wires in our cities and in our daily lives through wireless communication, the power of which is doubling every ten to eleven months (see the figure below).
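To put that doubling time in perspective (a sketch using only the ten-to-eleven-month figure cited above):

```python
# Sketch: a doubling time of ten to eleven months compounds to a factor of
# a few thousand per decade (120 months).
for months in (10, 11):
    factor = 2 ** (120 / months)
    print(f"doubling every {months} months: x{factor:,.0f} per decade")
```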
The figures below show the overall growth of the Internet based on the number of hosts (Web-server computers). These two charts plot the same data, but one is on a logarithmic axis and the other is linear. As has been discussed, while technology progresses exponentially, we experience it in the linear domain. From the perspective of most observers, nothing was happening in this area until the mid-1990s, when seemingly out of nowhere the World Wide Web and e-mail exploded into view. But the emergence of the Internet into a worldwide phenomenon was readily predictable by examining exponential trend data in the early 1980s from the ARPANET, predecessor to the Internet.50
This figure shows the same data on a linear scale.51
The explosion of the Internet appears to be a surprise from the linear chart but was perfectly predictable from the logarithmic one.
In addition to servers, the actual data traffic on the Internet has also doubled every year.52
To accommodate this exponential growth, the data transmission speed of the Internet backbone (as represented by the fastest announced backbone communication channels actually used for the Internet) has itself grown exponentially. Note that in the figure “Internet Backbone Bandwidth” below, we can actually see the progression of S-curves: the acceleration fostered by a new paradigm, followed by a leveling off as the paradigm runs out of steam, followed by renewed acceleration through paradigm shift.53
Another trend that will have profound implications for the twenty-first century is the pervasive movement toward miniaturization. The key feature sizes of a broad range of technologies, both electronic and mechanical, are decreasing, and at an exponential rate. At present, we are shrinking technology by a factor of about four per linear dimension per decade. This miniaturization is a driving force behind Moore’s Law, but it’s also reflected in the size of all electronic systems—for example, magnetic storage. We also see this decrease in the size of mechanical devices, as the figure on the size of mechanical devices illustrates.54
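A short arithmetic sketch of what a factor of about four per linear dimension per decade implies (the factor is the text's figure; the derived numbers follow from it):

```python
import math

# Sketch: shrinking linear feature size by ~4x per decade means the linear
# dimension halves about every five years, while area density and volume
# change far faster than the linear dimension itself.
linear_factor_per_decade = 4.0
halving_time_years = 10 * math.log(2) / math.log(linear_factor_per_decade)
print(f"linear size halves about every {halving_time_years:.0f} years")
print(f"area density rises ~{linear_factor_per_decade ** 2:.0f}x per decade; "
      f"volume shrinks ~{linear_factor_per_decade ** 3:.0f}x per decade")
```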
As the salient feature size of a wide range of technologies moves inexorably closer to the multinanometer range (less than one hundred nanometers—billionths of a meter), it has been accompanied by a rapidly growing interest in nanotechnology. Nanotechnology science citations have been increasing significantly over the past decade, as noted in the figure below.55
We see the same phenomenon in nanotechnology-related patents (below).56
As we will explore in chapter 5, the genetics (or biotechnology) revolution is bringing the information revolution, with its exponentially increasing capacity and price-performance, to the field of biology. Similarly, the nanotechnology revolution will bring the rapidly increasing mastery of information to materials and mechanical systems. The robotics (or “strong AI”) revolution involves the reverse engineering of the human brain, which means coming to understand human intelligence in information terms and then combining the resulting insights with increasingly powerful computational platforms. Thus, all three of the overlapping transformations—genetics, nanotechnology, and robotics—that will dominate the first half of this century represent different facets of the information revolution.
Information, Order, and Evolution: The Insights from Wolfram and Fredkin's Cellular Automata
* * *
As I’ve described in this chapter, every aspect of information and information technology is growing at an exponential pace. Inherent in our expectation of a Singularity taking place in human history is the pervasive importance of information to the future of human experience. We see information at every level of existence. Every form of human knowledge and artistic expression—scientific and engineering ideas and designs, literature, music, pictures, movies—can be expressed as digital information.
Our brains also operate digitally, through discrete firings of our neurons. The wiring of our interneuronal connections can be digitally described, and the design of our brains is specified by a surprisingly small digital genetic code.57
Indeed, all of biology operates through linear sequences of 2-bit DNA base pairs, which in turn control the sequencing of only twenty amino acids in proteins. Molecules form discrete arrangements of atoms. The carbon atom, with its four positions for establishing molecular connections, is particularly adept at creating a variety of three-dimensional shapes, which accounts for its central role in both biology and technology. Within the atom, electrons take on discrete energy levels. Other subatomic particles, such as protons, comprise discrete numbers of valence quarks.
Although the formulas of quantum mechanics are expressed in terms of both continuous fields and discrete levels, we do know that continuous levels can be expressed to any desired degree of accuracy using binary data.58 In fact, quantum mechanics, as the word “quantum” implies, is based on discrete values.
Physicist-mathematician Stephen Wolfram provides extensive evidence to show how increasing complexity can originate from a universe that is at its core a deterministic, algorithmic system (a system based on fixed rules with predetermined outcomes). In his book A New Kind of Science, Wolfram offers a comprehensive analysis of how the processes underlying a mathematical construction called “a cellular automaton” have the potential to describe every level of our natural world.59 (A cellular automaton is a simple computational mechanism that, for example, changes the color of each cell on a grid based on the color of adjacent or nearby cells according to a transformation rule.)
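A minimal working example of such a mechanism (a sketch of a one-dimensional, two-color cellular automaton of the kind Wolfram studies; the particular rule number, grid width, and step count are illustrative choices, not taken from the book):

```python
# Sketch: a one-dimensional, two-color cellular automaton. Each cell's next
# color depends only on its own color and its two neighbors', via a fixed
# transformation rule. Rule 30 is used here as a standard example of a very
# simple rule producing complex-looking behavior.
RULE = 30
WIDTH, STEPS = 63, 24

def step(cells, rule=RULE):
    out = []
    for i in range(len(cells)):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        out.append((rule >> neighborhood) & 1)               # look up new color
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1              # start from a single "on" cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```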
In his view, it is feasible to express all information processes in terms of operations on cellular automata, so Wolfram’s insights bear on several key issues related to information and its pervasiveness. Wolfram postulates that the universe itself is a giant cellular-automaton computer. In his hypothesis there is a digital basis for apparently analog phenomena (such as motion and time) and for formulas in physics, and we can model our understanding of physics as the simple transformations of a cellular automaton.
Others have proposed this possibility. Richard Feynman wondered about it in considering the relationship of information to matter and energy. Norbert Wiener heralded a fundamental change in focus from energy to information in his 1948 book Cybernetics and suggested that the transformation of information, not energy, was the fundamental building block of the universe.60 Perhaps the first to postulate that the universe is being computed on a digital computer was Konrad Zuse in 1967.61 Zuse is best known as the inventor of the first working programmable computer, which he developed from 1935 to 1941.