* * *
A machine is as distinctively and brilliantly and expressively human as a violin sonata or a theorem in Euclid.
—GREGORY VLASTOS
It is a far cry from the monkish calligrapher, working in his cell in silence, to the brisk “click, click” of the modern writing machine, which in a quarter of a century has revolutionized and reformed business.
—SCIENTIFIC AMERICAN, 1905
No communication technology has ever disappeared, but instead becomes increasingly less important as the technological horizon widens.
—ARTHUR C. CLARKE
I always keep a stack of books on my desk that I leaf through when I run out of ideas, feel restless, or otherwise need a shot of inspiration. Picking up a fat volume that I recently acquired, I consider the bookmaker’s craft: 470 finely printed pages organized into 16-page signatures, all of which are sewn together with white thread and glued onto a gray canvas cord. The hard linen-bound covers, stamped with gold letters, are connected to the signature block by delicately embossed end sheets. This is a technology that was perfected many decades ago. Books constitute such an integral element of our society—both reflecting and shaping its culture—that it is hard to imagine life without them. But the printed book, like any other technology, will not live forever.
The Life Cycle of a Technology
We can identify seven distinct stages in the life cycle of a technology.
During the precursor stage, the prerequisites of a technology exist, and dreamers may contemplate these elements coming together. We do not, however, regard dreaming as the same as inventing, even if the dreams are written down. Leonardo da Vinci drew convincing pictures of airplanes and automobiles, but he is not considered to have invented either.
The next stage, one highly celebrated in our culture, is invention, a very brief stage, similar in some respects to the process of birth after an extended period of labor. Here the inventor blends curiosity, scientific skills, determination, and usually a measure of showmanship to combine methods in a new way and brings a new technology to life.
The next stage is development, during which the invention is protected and supported by doting guardians (who may include the original inventor). Often this stage is more crucial than invention and may involve additional creation that can have greater significance than the invention itself. Many tinkerers had constructed finely hand-tuned horseless carriages, but it was Henry Ford’s innovation of mass production that enabled the automobile to take root and flourish.
The fourth stage is maturity. Although continuing to evolve, the technology now has a life of its own and has become an established part of the community. It may become so interwoven in the fabric of life that it appears to many observers that it will last forever. This creates an interesting drama when the next stage arrives, which I call the stage of the false pretenders.
Here an upstart threatens to eclipse the older technology. Its enthusiasts prematurely predict victory. While providing some distinct benefits, the newer technology is found on reflection to be lacking some key element of functionality or quality. When it fails to dislodge the established order, the technology conservatives take this as evidence that the original approach will indeed live forever.
This is usually a short-lived victory for the aging technology. Shortly thereafter, another new technology typically does succeed in relegating the original technology to the stage of obsolescence. In this part of the life cycle, the technology lives out its senior years in gradual decline, its original purpose and functionality now subsumed by a more spry competitor.
In this stage, which may comprise 5 to 10 percent of a technology’s life cycle, it finally yields to antiquity (as did the horse and buggy, the harpsichord, the vinyl record, and the manual typewriter).
In the mid-nineteenth century there were several precursors to the phonograph, including Léon Scott de Martinville’s phonautograph, a device that recorded sound vibrations as a printed pattern. It was Thomas Edison, however, who brought all of the elements together and invented the first device that could both record and reproduce sound in 1877. Further refinements were necessary for the phonograph to become commercially viable. It became a fully mature technology in 1949 when Columbia introduced the 33-rpm long-playing record (LP) and RCA Victor introduced the 45-rpm disc. The false pretender was the cassette tape, introduced in the 1960s and popularized during the 1970s. Early enthusiasts predicted that its small size and ability to be rerecorded would make the relatively bulky and scratchable record obsolete.
Despite these obvious benefits, cassettes lack random access and are prone to their own forms of distortion and lack of fidelity. The compact disc (CD) delivered the mortal blow. With the CD providing both random access and a level of quality close to the limits of the human auditory system, the phonograph record quickly entered the stage of obsolescence. Although still produced, the technology that Edison gave birth to almost 130 years ago has now reached antiquity.
Consider the piano, an area of technology that I have been personally involved with replicating. In the early eighteenth century Bartolommeo Cristofori was seeking a way to provide a touch response to the then-popular harpsichord so that the volume of the notes would vary with the intensity of the touch of the performer. Called gravicembalo col piano e forte (“harpsichord with soft and loud”), his invention was not an immediate success. Further refinements, including Stein’s Viennese action and Zumpe’s English action, helped to establish the “piano” as the preeminent keyboard instrument. It reached maturity with the development of the complete cast-iron frame, patented in 1825 by Alpheus Babcock, and has seen only subtle refinements since then. The false pretender was the electronic piano of the early 1980s. It offered substantially greater functionality. Compared to the single (piano) sound of the acoustic piano, the electronic variant offered dozens of instrument sounds, sequencers that allowed the user to play an entire orchestra at once, automated accompaniment, educational programs to teach keyboard skills, and many other features. The only feature it was missing was a good-quality piano sound.
This crucial flaw and the resulting failure of the first generation of electronic pianos led to the widespread conclusion that the piano would never be replaced by electronics. But the “victory” of the acoustic piano will not be permanent. With their far greater range of features and price-performance, digital pianos already exceed the sales of acoustic pianos in homes. Many observers feel that the quality of the “piano” sound on digital pianos now equals or exceeds that of the upright acoustic piano. With the exception of concert and luxury grand pianos (a small part of the market), the sale of acoustic pianos is in decline.
From Goat Skins to Downloads
So where in the technology life cycle is the book? Among its precursors were Mesopotamian clay tablets and Egyptian papyrus scrolls. In the second century B.C., the Ptolemies of Egypt created a great library of scrolls at Alexandria and outlawed the export of papyrus to discourage competition.
What were perhaps the first books were created by Eumenes II, ruler of ancient Greek Pergamum, using pages of vellum made from the skins of goats and sheep, which were sewn together between wooden covers. This technique enabled Eumenes to compile a library equal to that of Alexandria. Around the same time, the Chinese had also developed a crude form of book made from bamboo strips.
The development and maturation of books has involved three great advances. Printing, first experimented with by the Chinese in the eighth century A.D. using raised wood blocks, allowed books to be reproduced in much larger quantities, expanding their audience beyond government and religious leaders. Of even greater significance was the advent of movable type, which the Chinese and Koreans experimented with by the eleventh century, but the complexity of Asian characters prevented these early attempts from being fully successful. Johannes Gutenberg, working in the fifteenth century, benefited from the relative simplicity of the Roman character set. He produced his Bible, the first large-scale work printed entirely with movable type, in 1455.
While there has been a continual stream of evolutionary improvements in the mechanical and electromechanical process of printing, the technology of bookmaking did not see another qualitative leap until the availability of computer typesetting, which did away with movable type about two decades ago. Typography is now regarded as a part of digital image processing.
With books a fully mature technology, the false pretenders arrived about twenty years ago with the first wave of “electronic books.” As is usually the case, these false pretenders offered dramatic qualitative and quantitative benefits. CD-ROM- or flash memory–based electronic books can provide the equivalent of thousands of books with powerful computer-based search and knowledge navigation features. With Web- or CD-ROM- and DVD-based encyclopedias, I can perform rapid word searches using extensive logic rules, something that is just not possible with the thirty-three-volume “book” version I possess. Electronic books can provide pictures that are animated and that respond to our input. Pages are not necessarily ordered sequentially but can be explored along more intuitive connections.
As with the phonograph record and the piano, this first generation of false pretenders was (and still is) missing an essential quality of the original, which in this case is the superb visual characteristics of paper and ink. Paper does not flicker, whereas the typical computer screen is displaying sixty or more fields per second. This is a problem because of an evolutionary adaptation of the primate visual system. We are able to see only a very small portion of the visual field with high resolution. This portion, imaged by the fovea in the retina, is focused on an area about the size of a single word at twenty-two inches away. Outside of the fovea, we have very little resolution but exquisite sensitivity to changes in brightness, an ability that allowed our primitive forebears to quickly detect a predator that might be attacking. The constant flicker of a video graphics array (VGA) computer screen is detected by our eyes as motion and causes constant movement of the fovea. This substantially slows down reading speeds, which is one reason that reading on a screen is less pleasant than reading a printed book. This particular issue has been solved with flat-panel displays, which do not flicker.
Other crucial issues include contrast—a good-quality book has an ink-to-paper contrast of about 120:1; typical screens are perhaps half of that—and resolution. Print and illustrations in a book represent a resolution of about 600 to 1000 dots per inch (dpi), while computer screens are about one tenth of that.
The size and weight of computerized devices are approaching those of books, but such devices are still heavier than a paperback book. Paper books also do not run out of battery power.
Most important, there is the matter of the available software, by which I mean the enormous installed base of print books. Fifty thousand new print books are published each year in the United States, and millions of books are already in circulation. There are major efforts under way to scan and digitize print materials, but it will be a long time before the electronic databases have a comparable wealth of material. The biggest obstacle here is the understandable hesitation of publishers to make the electronic versions of their books available, given the devastating effect that illegal file sharing has had on the music-recording industry.
Solutions are emerging to each of these limitations. New, inexpensive display technologies have contrast, resolution, lack of flicker, and viewing angle comparable to high-quality paper documents. Fuel-cell power for portable electronics is being introduced, which will keep electronic devices powered for hundreds of hours between fuel-cartridge changes. Portable electronic devices are already comparable to the size and weight of a book. The primary issue is going to be finding secure means of making electronic information available. This is a fundamental concern for every level of our economy. Everything—including physical products, once nanotechnology-based manufacturing becomes a reality in about twenty years—is becoming information.
Moore’s Law and Beyond
Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1.5 tons.
—POPULAR MECHANICS, 1949
Computer Science is no more about computers than astronomy is about telescopes.
—E. W. DIJKSTRA
Before considering further the implications of the Singularity, let’s examine the wide range of technologies that are subject to the law of accelerating returns. The exponential trend that has gained the greatest public recognition has become known as Moore’s Law. In the mid-1970s, Gordon Moore, a leading inventor of integrated circuits and later chairman of Intel, observed that we could squeeze twice as many transistors onto an integrated circuit every twenty-four months (in the mid-1960s, he had estimated twelve months). Given that the electrons would consequently have less distance to travel, circuits would also run faster, providing an additional boost to overall computational power. The result is exponential growth in the price-performance of computation. This doubling rate—about twelve months—is much faster than the doubling rate for paradigm shift that I spoke about earlier, which is about ten years. Typically, we find that the doubling time for different measures—price-performance, bandwidth, capacity—of the capability of information technology is about one year.
The primary driving force of Moore’s Law is a reduction of semiconductor feature sizes, which shrink by half every 5.4 years in each dimension. (See the figure below.) Since chips are functionally two-dimensional, this means doubling the number of elements per square millimeter every 2.7 years.22
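The conversion from linear shrinkage to density is simple enough to verify directly. A minimal sketch (illustrative only; the 5.4-year figure is the one quoted above):

```python
# Illustrative check: halving each linear feature dimension every 5.4 years
# doubles the number of elements per unit area twice as fast, since chips
# are functionally two-dimensional (area scales as the square of length).

linear_halving_years = 5.4   # years for feature size to halve, per dimension
dimensions = 2               # two-dimensional chip layout

# Each linear halving yields 2**dimensions = 4x the density, i.e. two
# doublings, so the density doubling time is half the linear halving time.
density_doubling_years = linear_halving_years / dimensions
print(density_doubling_years)  # -> 2.7
```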
The following charts combine historical data with the semiconductor-industry road map (International Technology Roadmap for Semiconductors [ITRS] from Sematech), which projects through 2018.
The cost of DRAM (dynamic random access memory) per square millimeter has also been coming down. The doubling time for bits of DRAM per dollar has been only 1.5 years.23
A similar trend can be seen with transistors. You could buy one transistor for a dollar in 1968; in 2002 a dollar purchased about ten million transistors. Since DRAM is a specialized field that has seen its own innovation, the halving time for average transistor price is slightly slower than for DRAM, about 1.6 years (see the figure below).24
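Those two endpoints are enough to check the trend for rough consistency. A minimal sketch, using only the 1968 and 2002 figures quoted above:

```python
import math

# Illustrative check: estimate the halving time for transistor price from
# the two endpoints quoted in the text (one transistor per dollar in 1968,
# about ten million per dollar in 2002).
transistors_per_dollar_1968 = 1
transistors_per_dollar_2002 = 10_000_000
years = 2002 - 1968

doublings = math.log2(transistors_per_dollar_2002 / transistors_per_dollar_1968)
print(years / doublings)  # -> ~1.46 years per doubling, in the same range as
                          #    the 1.6-year figure from the full data series
```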
This remarkably smooth acceleration in price-performance of semiconductors has progressed through a series of stages of process technologies (defined by feature sizes) at ever smaller dimensions. The key feature size is now dipping below one hundred nanometers, which is considered the threshold of “nanotechnology.”25
Unlike Gertrude Stein’s rose, it is not the case that a transistor is a transistor is a transistor. As they have become smaller and less expensive, transistors have also become faster by a factor of about one thousand over the course of the past thirty years (see the figure below)—again, because the electrons have less distance to travel.26
If we combine the exponential trends toward less-expensive transistors and faster cycle times, we find a halving time of only 1.1 years in the cost per transistor cycle (see the figure below).27 The cost per transistor cycle is a more accurate overall measure of price-performance because it takes into account both speed and capacity. But the cost per transistor cycle still does not take into account innovation at higher levels of design (such as microprocessor design) that improves computational efficiency.
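When two independent exponential improvements combine, their rates (the reciprocals of their doubling times) add. A minimal sketch, assuming the 1.6-year cost halving from above and a speed doubling time inferred from the thousandfold speedup over thirty years:

```python
import math

# Illustrative only: combining two exponential trends. Rates add as
# reciprocals of the doubling (or halving) times.
cost_halving_years = 1.6                     # cost per transistor (from above)
speed_doubling_years = 30 / math.log2(1000)  # ~3.0 years, from the ~1,000x
                                             # speedup over thirty years

combined = 1 / (1 / cost_halving_years + 1 / speed_doubling_years)
print(combined)  # -> ~1.0 year, consistent with the 1.1-year halving time
                 #    for cost per transistor cycle
```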
The number of transistors in Intel processors has doubled every two years (see the figure below). Several other factors have boosted price-performance, including clock speed, reduction in cost per microprocessor, and processor design innovations.28
Processor performance in MIPS has doubled every 1.8 years per processor (see the figure below). Again, note that the cost per processor has also declined through this period.29
If I examine my own four-plus decades of experience in this industry, I can compare the MIT computer I used as a student in the late 1960s to a recent notebook. In 1967 I had access to a multimillion-dollar IBM 7094 with 32K (36-bit) words of memory and a quarter of a MIPS processor speed. In 2004 I used a $2,000 personal computer with a half-billion bytes of RAM and a processor speed of about 2,000 MIPS. The MIT computer was about one thousand times more expensive, so the ratio of cost per MIPS is about eight million to one.
My recent computer provides 2,000 MIPS of processing at a cost that is about 2^24 lower than that of the computer I used in 1967. That’s 24 doublings in 37 years, or about 18.5 months per doubling. If we factor in the increased value of the approximately 2,000 times greater RAM, vast increases in disk storage, and the more powerful instruction set of my circa 2004 computer, as well as vast improvements in communication speeds, more powerful software, and other factors, the doubling time comes down even further.
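Rerunning that comparison from the quoted endpoints is a useful sanity check; the raw numbers give a ratio of roughly 2^23 (eight million to one) and a doubling time near nineteen months, consistent with the rounded figures above. A minimal sketch:

```python
import math

# Illustrative check of the 1967-vs-2004 price-performance comparison,
# using only the figures quoted above.
mips_1967, mips_2004 = 0.25, 2000
cost_ratio = 1000                  # the IBM 7094 was ~1,000x more expensive

cost_per_mips_ratio = cost_ratio * (mips_2004 / mips_1967)
print(cost_per_mips_ratio)         # -> 8,000,000, i.e. roughly 2**23

doublings = math.log2(cost_per_mips_ratio)
print(37 / doublings * 12)         # -> ~19 months per doubling, in line with
                                   #    the ~18.5 months cited above
```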
Despite this massive deflation in the cost of information technologies, demand has more than kept up. The number of bits shipped has doubled every 1.1 years, faster than the halving time in cost per bit, which is 1.5 years.30 As a result, the semiconductor industry enjoyed 18 percent annual growth in total revenue from 1958 to 2002.31 The entire information-technology (IT) industry has grown from 4.2 percent of the gross domestic product in 1977 to 8.2 percent in 1998.32 IT has become increasingly influential in all economic sectors. The share of value contributed by information technology for most categories of products and services is rapidly increasing. Even common manufactured products such as tables and chairs have an information content, represented by their computerized designs and the programming of the inventory-procurement systems and automated-fabrication systems used in their assembly.
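That 18 percent figure follows directly from the two doubling times just quoted, since revenue is bits shipped times price per bit. A minimal sketch:

```python
# Illustrative only: annual revenue growth implied by the two trends above.
# Revenue = (bits shipped) x (price per bit); the exponents subtract.
bit_doubling_years = 1.1   # bits shipped double every 1.1 years
cost_halving_years = 1.5   # cost per bit halves every 1.5 years

annual_growth = 2 ** (1 / bit_doubling_years - 1 / cost_halving_years) - 1
print(f"{annual_growth:.1%}")  # -> ~18.3%, matching the industry's 18 percent
                               #    annual revenue growth from 1958 to 2002
```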