I WAS JUST WONDERING ABOUT THAT.

  As we look back in time and get closer to the event of the big bang, chaos is shrinking to zero. Thus from the subjective perspective, time is stretching out. Indeed, as we go back in time and approach the big bang, subjective time approaches infinity. Thus it is not possible to go back past a subjective infinity of time.

  THAT’S A LOAD OFF MY MIND. NOW YOU SAID THAT THE EXPONENTIAL PROGRESS OF AN EVOLUTIONARY PROCESS GOES ON FOREVER. IS THERE ANYTHING THAT CAN STOP IT?

  Only a catastrophe that wipes out the entire process.

  SUCH AS AN ALL-OUT NUCLEAR WAR?

  That’s one scenario, but in the next century, we will encounter a plethora of other “failure modes.” We’ll talk about this in later chapters.

  I CAN’T WAIT. NOW TELL ME THIS, WHAT DOES THE LAW OF ACCELERATING RETURNS HAVE TO DO WITH THE TWENTY-FIRST CENTURY?

  Exponential trends are immensely powerful but deceptive. They linger for eons with very little effect. But once they reach the “knee of the curve,” they explode with unrelenting fury. With regard to computer technology and its impact on human society, that knee is approaching with the new millennium. Now I have a question for you.

  SHOOT.

  Just who are you anyway?

  WHY, I’M THE READER.

  Of course. Well, it’s good to have you contributing to the book while there’s still time to do something about it.

  GLAD TO. NOW, YOU NEVER DID GIVE THE ENDING TO THE EMPEROR STORY. SO DOES THE EMPEROR LOSE HIS EMPIRE, OR DOES THE INVENTOR LOSE HIS HEAD?

  I have two endings, so I just can’t say.

  MAYBE THEY REACH A COMPROMISE SOLUTION. THE INVENTOR MIGHT BE HAPPY TO SETTLE FOR, SAY, JUST ONE PROVINCE OF CHINA.

  Yes, that would be a good result. And maybe an even better parable for the twenty-first century.

  CHAPTER TWO

  THE INTELLIGENCE OF EVOLUTION

  Here’s another critical question for understanding the twenty-first century: Can an intelligence create another intelligence more intelligent than itself?

  Let’s first consider the intelligent process that created us: evolution.

  Evolution is a master programmer. It has been prolific, designing millions of species of breathtaking diversity and ingenuity. And that’s just here on Earth. The software programs have all been written down, recorded as digital data in the chemical structure of an ingenious molecule called deoxyribonucleic acid, or DNA. DNA was first described by J. D. Watson and F. H. C. Crick in 1953 as a double helix consisting of a twisting pair of strands of polynucleotides, with two bits of information encoded at each rung of the spiral staircase by the choice of nucleotides.1 This master “read only” memory controls the vast machinery of life.

  Supported by a twisting sugar-phosphate backbone, the DNA molecule consists of between several dozen and several million rungs, each of which is coded with one nucleotide letter drawn from a four-letter alphabet of base pairs (adenine-thymine, thymine-adenine, cytosine-guanine, and guanine-cytosine). Human DNA is a long molecule—it would measure up to six feet in length if stretched out—but it is packed into an elaborate coil only a tiny fraction of an inch across.
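The two-bits-per-rung arithmetic can be made concrete: with four possible letters, each rung selects one of 2² alternatives. Here is a minimal sketch in Python; the particular bit assignment and the sample sequence are invented for the illustration.

```python
# Each DNA letter is one of four choices, so each rung carries exactly 2 bits.
# The mapping of letters to bit patterns below is arbitrary, for illustration.
BITS = {"A": "00", "T": "01", "C": "10", "G": "11"}

def encode(sequence):
    """Pack a nucleotide string into its binary representation."""
    return "".join(BITS[base] for base in sequence.upper())

strand = "GATTACA"          # a made-up 7-letter sequence
packed = encode(strand)
print(packed)               # 2 bits per rung
print(len(packed))          # -> 14, i.e. 7 rungs x 2 bits
```

Seven rungs thus yield fourteen bits, which is the sense in which DNA is a digital storage medium.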

  The mechanism to peel off copies of the DNA code consists of other special machines: organic molecules called enzymes, which split each base pair and then assemble two identical DNA molecules by rematching the broken base pairs. Other little chemical machines then verify the validity of the copy by checking the integrity of the base-pair matches. The error rate of these chemical information-processing transactions is about one error in a billion base-pair replications. There are further redundancy and error-correction codes built into the data itself, so meaningful mistakes are rare. Some mistakes do get through, most of which cause defects in a single cell. Mistakes in an early fetal cell may cause birth defects in the newborn organism. Once in a long while such defects offer an advantage, and this new encoding may eventually be favored through the enhanced survival of that organism and its offspring.

  The DNA code controls the salient details of the construction of every cell in the organism, including the shapes and processes of the cell, and of the organs comprised of the cells. In a process called translation, other enzymes translate the coded DNA information by building proteins. It is these proteins that define the structure, behavior, and intelligence of each cell, and of the organism.2
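The translation step can be sketched as a table lookup: the coding strand is read three letters (one codon) at a time, and each codon specifies an amino acid or a stop. The sketch below includes only a handful of the 64 codons of the standard genetic code; real translation uses the full table and considerably more machinery.

```python
# Sketch of translation: read a coding DNA strand one codon (three letters)
# at a time and look up the amino acid each codon specifies.
# Only a few of the 64 codons of the standard genetic code are shown here.
CODON_TABLE = {
    "ATG": "Met",   # methionine; also the usual start codon
    "TGG": "Trp",   # tryptophan
    "TTT": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "TAA": None,    # stop codon: translation ends
}

def translate(dna):
    """Return the amino-acid chain encoded by a coding DNA strand."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino is None:        # stop codon reached
            break
        protein.append(amino)
    return protein

print(translate("ATGTGGTTTTAA"))   # -> ['Met', 'Trp', 'Phe']
```

Twenty amino acids assembled this way, in chains thousands of units long, are what the text means by proteins defining the structure and behavior of each cell.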

  This computational machinery is at once remarkably complex and amazingly simple. Only four base pairs provide the data storage for the complexity of all the millions of life-forms on Earth, from primitive bacteria to human beings. The ribosomes—little tape-recorder molecules—read the code and build proteins from only twenty amino acids. The synchronized flexing of muscle cells, the intricate biochemical interactions in our blood, the structure and functioning of our brains, and all of the other diverse functions of the Earth’s creatures are programmed in this efficient code.

  The genetic information-processing appliance is an existence proof of nanoengineering (building machines atom by atom), because the machinery of life indeed takes place on the atomic level. Tiny bits of molecules consisting of just dozens of atoms encode each bit and perform the transcription, error detection, and correction functions. The actual building of the organic stuff is conducted atom by atom with the building of the amino acid chains.

  This is our understanding of the hardware of the computational engine driving life on Earth. We are just beginning, however, to unravel the software. While prolific, evolution has been a sloppy programmer. It has left us the object code (billions of bits of coded data), but there is no higher-level source code (statements in a language we can understand), no explanatory comments, no “help” file, no documentation, and no user manual. Through the Human Genome Project, we are in the process of writing down the 6-billion-bit human genetic code, and are capturing the code for thousands of other species as well.3 But reverse engineering the genome code—understanding how it works—is a slow and laborious process that we are just beginning. As we do this, however, we are learning the information-processing basis of disease, maturation, and aging, and are gaining the means to correct and refine evolution’s unfinished invention.

  In addition to its lack of documentation, evolution is also a very inefficient programmer. Most of the code—97 percent according to current estimates—does not compute; that is, most of the sequences do not produce proteins and appear to be useless. That means that the active part of the code is only about 23 megabytes, which is less than the code for Microsoft Word. The code is also replete with redundancies. For example, an apparently meaningless sequence called Alu, comprising 300 nucleotide letters, occurs 300,000 times in the human genome, representing more than 3 percent of our genetic program.
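Both figures in this paragraph follow directly from the numbers given. A quick check, taking the 6-billion-bit genome and roughly 3 billion letters per genome copy from the surrounding text:

```python
# Check the "about 23 megabytes" figure from the numbers in the text:
# a 6-billion-bit genome of which only ~3 percent codes for proteins.
genome_bits = 6_000_000_000
active_fraction = 0.03            # 100% minus the 97% that "does not compute"

active_bytes = genome_bits * active_fraction / 8
print(active_bytes / 1_000_000)   # -> 22.5, i.e. roughly 23 megabytes

# The Alu redundancy: a 300-letter sequence repeated 300,000 times,
# measured against roughly 3 billion letters per genome copy.
alu_letters = 300 * 300_000
print(alu_letters / 3_000_000_000)  # -> 0.03, i.e. about 3 percent
```

The 22.5-megabyte result is where the "less than the code for Microsoft Word" comparison comes from.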

  The theory of evolution states that programming changes are introduced essentially at random. The changes are evaluated for retention by survival of the entire organism and its ability to reproduce. Yet the genetic program controls not just the one characteristic being “experimented” with, but millions of other features as well. Survival of the fittest appears to be a crude technique capable of concentrating on one or at most a few characteristics at a time. Since the vast majority of changes make things worse, it may seem surprising that this technique works at all.

  This contrasts with the conventional human approach to computer programming in which changes are designed with a purpose in mind, multiple changes may be introduced at a time, and the changes made are tested by focusing in on each change, rather than by overall survival of the program. If we attempted to improve our computer programs the way that evolution apparently improves its design, our programs would collapse from increasing randomness.

  It is remarkable that by concentrating on one refinement at a time, such elaborate structures as the human eye could have been designed. Some observers have postulated that such intricate design is impossible through the incremental-refinement method that evolution uses. A design as intricate as the eye or the heart would appear to require a design methodology in which it was designed all at once.

  However, the fact that designs such as the eye have many interacting aspects does not rule out their creation through a design path comprising one small refinement at a time. In utero, the human fetus appears to go through a process of evolution, although whether this is a corollary of the phases of evolution that led to our subspecies is not universally accepted. Nonetheless, most medical students learn that ontogeny (fetal development) recapitulates phylogeny (evolution of a genetically related group of organisms, such as a phylum). We appear to start out in the womb with similarities to a fish embryo, progress to an amphibian, then a mammal, and so on. Regardless of the phylogeny controversy, we can see in the history of evolution the intermediate design drafts that evolution went through in designing apparently “complete” mechanisms such as the human eye. Even though evolution focuses on just one issue at a time, it is indeed capable of creating striking designs with many interacting parts.

  There is a disadvantage, however, to evolution’s incremental method of design: It can’t easily perform complete redesigns. It is stuck, for example, with the very slow computing speed of the mammalian neuron. But there is a way around this, as we will explore in chapter 6, “Building New Brains.”

  The Evolution of Evolution

  There are also certain ways in which evolution has evolved its own means for evolution. The DNA-based coding itself is clearly one such means. Within the code, other means have developed. Certain design elements, such as the shape of the eye, are coded in a way that makes mutations less likely. The error detection and correction mechanisms built into the DNA-based coding make changes in these regions very unlikely. This enforcement of design integrity for certain critical features evolved because they provide an advantage—changes to these characteristics are usually catastrophic. Other design elements, such as the number and layout of light-sensitive rods and cones in the retina, have fewer design enforcements built into the code. If we examine the evolutionary record, we do see more recent change in the layout of the retina than in the shape of the eyeball itself. So in certain ways, the strategies of evolution have evolved. The Law of Accelerating Returns says that it should, for evolving its own strategies is the primary way that an evolutionary process builds on itself.

  By simulating evolution, we can also confirm the ability of evolution’s “one step at a time” design process to build ingenious designs of many interacting elements. One example is a software simulation of the evolution of life-forms called Network Tierra designed by Thomas Ray, a biologist and rain forest expert.4 Ray’s “creatures” are software simulations of organisms in which each “cell” has its own DNA-like genetic code. The organisms compete with each other for the limited simulated space and energy resources of their simulated environment.

  A unique aspect of this artificial world is that the creatures have free rein of 150 computers on the Internet, like “islands in an archipelago” according to Ray. One of the goals of this research is to understand how the explosion of diverse body plans that occurred on Earth during the Cambrian period some 570 million years ago was possible. “To watch evolution unfold is a thrill,” Ray exclaimed as he watched his creatures evolve from unspecialized single-celled organisms to multicellular organisms with at least modest increases in diversity. Ray has reportedly identified the equivalent of parasites, immunities, and crude social interaction. One of the acknowledged limitations in Ray’s simulation is a lack of complexity in his simulated environment. One insight of this research is the need for a suitably chaotic environment as a key resource needed to push evolution along, a resource in ample supply in the real world.

  A practical application of evolution is the area of evolutionary algorithms, in which millions of evolving computer programs compete with one another in a simulated evolutionary process, thereby harnessing the inherent intelligence of evolution to solve real-world problems. Since the intelligence of evolution is weak, we focus and amplify it the same way a lens concentrates the sparse rays of the sun. We’ll talk more about this powerful approach to software design in chapter 4, “A New Form of Intelligence on Earth.”
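The mechanism can be shown in miniature: random mutation plus survival-based selection, iterated quickly, climbs toward a goal no individual change was designed to reach. The following is a toy sketch of the technique, not any particular production system; the all-ones target, population size, and mutation rate are arbitrary choices for the illustration.

```python
import random

# Minimal evolutionary algorithm: random mutation plus survival-based
# selection evolves a bit string toward a target "environment."
random.seed(42)                         # fixed seed for reproducibility

TARGET = [1] * 32                       # the environment rewards all-ones

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    """Flip each bit with small probability -- undirected change."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population of 50 candidate "organisms."
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

for generation in range(200):
    # "Survival of the fittest": keep the better half of the population,
    # refill the rest with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]
    if fitness(population[0]) == 32:
        break

print(fitness(population[0]))           # best fitness found (out of 32)
```

Note that every individual change is random and most make things worse, just as the text describes; the design emerges entirely from selection, and from running thousands of generations in seconds rather than eons.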

  The Intelligence Quotient of Evolution

  Let us first praise evolution. It has created a plethora of designs of indescribable beauty, complexity, and elegance, not to mention effectiveness. Indeed, some theories of aesthetics define beauty as the degree of success in emulating the natural beauty that evolution has created. It created human beings with their intelligent human brains, beings smart enough to create their own intelligent technology.

  Its intelligence seems vast. Or is it? It has one deficiency—evolution is very slow. While it is true that it has created some remarkable designs, it has taken an extremely long period of time to do so. It took eons for the process to get started, and, for the evolution of life-forms, eons meant billions of years. Our human forebears also took eons to get started in their creation of technology, but for us eons meant only tens of thousands of years, a distinct improvement.

  Is the length of time required to solve a problem or create an intelligent design relevant to an evaluation of intelligence? The authors of our human intelligence-quotient tests seem to think so, which is why most IQ tests are timed. We regard solving a problem in a few seconds better than solving it in a few hours or years. Periodically, the timed aspect of IQ tests gives rise to controversy, but it shouldn’t. The speed of an intelligent process is a valid aspect of its evaluation. If a large, hunched, catlike animal perched on a tree limb suddenly appears out of the corner of my left eye, designing an evasive tactic in a second or two is preferable to pondering the challenge for a few hours. If your boss asks you to design a marketing program, she probably doesn’t want to wait a hundred years. Viking Penguin wanted this book delivered before the end of the second, not the third, millennium.5

  Evolution has achieved an extraordinary record of design, yet has taken an extraordinarily long period of time to do so. If we factor its achievements by its ponderous pace, I believe we need to conclude that its intelligence quotient is only infinitesimally greater than zero. An IQ of only slightly greater than zero (defining truly arbitrary behavior as zero) is enough for evolution to beat entropy and create wonderful designs, given enough time, in the same way that an ever so slight asymmetry in the balance between matter and antimatter was enough to allow matter to almost completely overtake its antithesis.

  Evolution is thereby only a quantum smarter than completely unintelligent behavior. The reason that our human-created evolutionary algorithms are effective is that we speed up time a million- or billionfold, so as to concentrate and focus its otherwise diffuse power. In contrast, humans are a lot smarter than just a quantum greater than total stupidity (of course, your view may vary depending on the latest news reports).

  THE END OF THE UNIVERSE

  What does the Law of Time and Chaos say about the end of the Universe?

  One theory is that the Universe will continue its expansion forever. Alternatively, if there’s enough stuff, then the force of the Universe’s own gravity will stop the expansion, resulting in a final “big crunch.” Unless, of course, there’s an antigravity force. Or if the “cosmological constant,” Einstein’s “fudge factor,” is big enough. I’ve had to rewrite this paragraph three times over the past several months because the physicists can’t make up their minds. The latest speculation apparently favors indefinite expansion.

  Personally, I prefer the idea of the Universe closing in again on itself as more aesthetically pleasing. That would mean that the Universe would reverse its expansion and reach a singularity again. We can speculate that it would again expand and contract in an endless cycle. Most things in the Universe seem to move in cycles, so why not the Universe itself? The Universe could then be regarded as a tiny wave particle in some other really big Universe. And that big Universe would itself be a vibrating particle in yet another even bigger Universe. Conversely, the tiny wave particles in our Universe can each be regarded as little Universes, with each of their vibrations lasting fractions of a trillionth of a second in our Universe representing billions of years of expansion and contraction in that little Universe. And each particle in those little Universes could be ... okay, so I’m getting a little carried away.

  How to Unsmash a Cup

  Let’s say the Universe reverses its expansion. The phase of contraction has the opposite characteristics of the phase of expansion that we are now in. Clearly, chaos in the Universe will be decreasing as the Universe gets smaller. I can see that this is the case by considering the endpoint, which is again a singularity with no size, and therefore no disorder.

  We regard time as moving in one direction because processes in time are not generally reversible. If we smash a cup, we find it difficult to unsmash it. The reason for this has to do with the second law of thermodynamics. Since overall entropy may increase but can never decrease, time has directionality. Smashing a cup increases randomness. Unsmashing the cup would violate the second law of thermodynamics. Yet in the contracting phase of the Universe, chaos is decreasing, so we should regard time’s direction as reversed.