Yet since a period of only a few centuries is sufficient to progress from mechanical technology to the vast explosion of intelligence and communication of the Singularity, under the SETI assumption there should be billions of civilizations in our light sphere (thousands or millions in our galaxy) whose technology is ahead of ours to an unimaginable degree. In at least some discussions of the SETI project, we see the same kind of linear thinking that permeates every other field: the assumption that civilizations will reach our level of technology and then progress from that point only very gradually, for thousands if not millions of years. Yet the jump from the first stirrings of radio to powers beyond those of a mere type II civilization takes only a few hundred years. So the skies should be ablaze with intelligent transmissions.

  Yet the skies are quiet. It is odd and intriguing that we find the cosmos so silent. As Enrico Fermi asked in the summer of 1950, “Where is everybody?”73 A sufficiently advanced civilization would not be likely to restrict its transmissions to subtle signals on obscure frequencies. Why are all the ETIs so shy?

  There have been attempts to respond to the so-called Fermi Paradox (which, granted, is a paradox only if one accepts the optimistic parameters that most observers apply to the Drake equation). One common response is that a civilization may obliterate itself once it reaches radio capability. This explanation might be acceptable if we were talking about only a few such civilizations, but with the common SETI assumptions implying billions of them, it is not credible to believe that every one of them destroyed itself.

  Other arguments run along this same line. Perhaps “they” have decided not to disturb us (given how primitive we are) and are just watching us quietly (an ethical guideline that will be familiar to Star Trek fans). Again, it is hard to believe that every such civilization out of the billions that should exist has made the same decision. Or, perhaps, they have moved on to more capable communication paradigms. I do believe that more capable communication methods than electromagnetic waves—even very high-frequency ones—are likely to be feasible and that an advanced civilization (such as we will become over the next century) is likely to discover and exploit them. But it is very unlikely that there would be absolutely no role left for electromagnetic waves, even as a by-product of other technological processes, in any of these many millions of civilizations.

  Incidentally, this is not an argument against the value of the SETI project, which should have high priority, because the negative finding is no less important than a positive result.

  The Limits of Computation Revisited. Let’s consider some additional implications of the law of accelerating returns for intelligence in the cosmos. In chapter 3 I discussed the ultimate cold laptop and estimated the optimal computational capacity of a one-liter, one-kilogram computer at around 10^42 cps, which is sufficient to perform the equivalent of ten thousand years of the thinking of ten billion human brains in ten microseconds. If we allow more intelligent management of energy and heat, the potential of one kilogram of matter to compute may be as high as 10^50 cps.
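  A quick sketch of the arithmetic behind that claim, assuming roughly 10^16 cps per human brain (a figure in the spirit of the chapter 3 estimates, not stated in this passage):

```python
# Rough arithmetic behind the "ultimate cold laptop" claim.
CPS_PER_BRAIN = 1e16          # assumed estimate of cps per human brain
BRAINS = 1e10                 # ten billion human brains
YEARS = 1e4                   # ten thousand years of thinking
SECONDS_PER_YEAR = 3.15e7

total_ops = CPS_PER_BRAIN * BRAINS * YEARS * SECONDS_PER_YEAR   # ~3e37 operations
laptop_cps = 1e42                                               # cold-laptop limit

print(f"total operations: {total_ops:.1e}")
print(f"time on a 1e42 cps laptop: {total_ops / laptop_cps:.1e} seconds")
# ~3e-5 seconds, i.e. tens of microseconds
```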

  The technical requirements to achieve computational capacities in this range are daunting, but as I pointed out, the appropriate mental experiment is to consider the vast engineering ability of a civilization with 10^42 cps per kilogram, not the limited engineering ability of humans today. A civilization at 10^42 cps is likely to figure out how to get to 10^43 cps and then to 10^44 and so on. (Indeed, we can make the same argument at each step to get to the next.)

  Once civilization reaches these levels it is obviously not going to restrict its computation to one kilogram of matter, any more than we do so today. Let’s consider what our civilization can accomplish with the mass and energy in our own vicinity. The Earth contains a mass of about 6 × 10^24 kilograms. Jupiter has a mass of about 1.9 × 10^27 kilograms. If we ignore the hydrogen and helium, we have about 1.7 × 10^26 kilograms of matter in the solar system, not including the sun (which ultimately is also fair game). The overall solar system, which is dominated by the sun, has a mass of about 2 × 10^30 kilograms. As a crude upper-bound analysis, if we apply the mass in the solar system to our 10^50 estimate of the limit of computational capacity per kilogram of matter (based on the limits for nanocomputing), we get a limit of 10^80 cps for computation in our “vicinity.”

  Obviously, there are practical considerations that are likely to make it difficult to reach this kind of upper limit. But even if we devoted only one twentieth of 1 percent (0.0005) of the matter of the solar system to computational and communication resources, we would get capacities of 10^69 cps for “cold” computing and 10^77 cps for “hot” computing.74
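  These figures follow from a few multiplications, sketched here using only the quantities quoted above:

```python
# Back-of-the-envelope version of the solar-system computing limits.
SOLAR_SYSTEM_MASS_KG = 2e30   # total mass, dominated by the sun
CPS_PER_KG_COLD = 1e42        # "cold" computing limit per kilogram
CPS_PER_KG_HOT = 1e50         # limit with aggressive energy and heat management

# Crude upper bound: all of the mass at the hot limit
print(f"upper bound: {SOLAR_SYSTEM_MASS_KG * CPS_PER_KG_HOT:.0e} cps")    # ~2e+80

# One twentieth of 1 percent of the mass devoted to computation
mass_used = 0.0005 * SOLAR_SYSTEM_MASS_KG                                 # 1e+27 kg
print(f"cold: {mass_used * CPS_PER_KG_COLD:.0e} cps")                     # 1e+69
print(f"hot:  {mass_used * CPS_PER_KG_HOT:.0e} cps")                      # 1e+77
```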

  Engineering estimates have been made for computing at these scales that take into consideration complex design requirements such as energy usage, heat dissipation, internal communication speeds, the composition of matter in the solar system, and many other factors. These designs use reversible computing, but as I pointed out in chapter 3, we still need to consider the energy requirements for correcting errors and communicating results. In one analysis, computational neuroscientist Anders Sandberg reviewed the computational capacity of an Earth-size computational “object” called Zeus.75 The conceptual design of this “cold” computer calls for about 10^25 kilograms of carbon (about 1.8 times the mass of the Earth) in the form of diamondoid, organized into 5 × 10^37 computational nodes, each of which uses extensive parallel processing. Zeus provides an estimated peak of 10^61 cps of computation or, if used for data storage, 10^47 bits. A primary limiting factor for the design is the number of bit erasures permitted (it allows up to 2.6 × 10^32 bit erasures per second), which are needed primarily to correct errors from cosmic rays and quantum effects.
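  To put that erasure budget in perspective, the Landauer limit (kT ln 2 of energy per erased bit) gives a floor on the power such a design must dissipate just for error correction. The operating temperature used below is an illustrative assumption, not a figure from Sandberg’s analysis:

```python
import math

# Minimum power implied by the permitted bit erasures (Landauer's principle).
K_B = 1.380649e-23            # Boltzmann constant, J/K
T = 4.0                       # assumed operating temperature in kelvin ("cold" design)
ERASURES_PER_SECOND = 2.6e32  # Zeus's permitted erasure rate

energy_per_erasure = K_B * T * math.log(2)     # joules per erased bit
print(f"minimum erasure power: {ERASURES_PER_SECOND * energy_per_erasure:.1e} W")
# ~1e10 W at 4 K; the floor scales linearly with temperature
```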

  In 1959 astrophysicist Freeman Dyson proposed a concept of curved shells around a star as a way to provide both energy and habitats for an advanced civilization. One conception of the Dyson Sphere is quite literally a thin sphere around a star to gather energy.76 The civilization lives in the sphere, and gives off heat (infrared energy) outside the sphere (away from the star). Another (and more practical) version of the Dyson Sphere is a series of curved shells, each of which blocks only a portion of the star’s radiation. In this way Dyson Shells can be designed to have no effect on existing planets, particularly those, like the Earth, that harbor an ecology that needs to be protected.
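  For a rough sense of scale, a complete thin shell at the Earth’s orbital radius would intercept essentially the sun’s entire output. The solar constant and astronomical unit used below are standard values, not figures from the text:

```python
import math

# Power intercepted by a full thin shell at 1 AU.
SOLAR_CONSTANT = 1361.0       # W per square meter at 1 AU
AU_METERS = 1.496e11          # one astronomical unit in meters

shell_area = 4 * math.pi * AU_METERS ** 2
print(f"intercepted power: {SOLAR_CONSTANT * shell_area:.1e} W")
# ~3.8e26 W, essentially the sun's full luminosity
```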

  Although Dyson proposed his concept as a means of providing vast amounts of space and energy for an advanced biological civilization, it can also be used as the basis for star-scale computers. Such Dyson Shells could orbit our sun without affecting the sunlight reaching the Earth. Dyson imagined intelligent biological creatures living in the shells or spheres, but since civilization moves rapidly toward nonbiological intelligence once it discovers computation, there would be no reason to populate the shells with biological humans.

  Another refinement of the Dyson concept is that the heat radiated by one shell could be captured and used by a parallel shell that is placed at a position farther from the sun. Computer scientist Robert Bradbury points out that there could be any number of such layers and proposes a computer aptly called a “Matrioshka brain,” organized as a series of nested shells around the sun or another star. One such conceptual design analyzed by Sandberg is called Uranos, which is designed to use 1 percent of the nonhydrogen, nonhelium mass in the solar system (not including the sun), or about 10^24 kilograms, a bit smaller than Zeus.77 Uranos provides about 10^39 computational nodes, an estimated 10^51 cps of computation, and about 10^52 bits of storage.
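  Dividing the quoted totals by the node counts gives a rough per-node comparison of the two designs, a sketch using only the figures cited above:

```python
# Per-node comparison of the Zeus and Uranos conceptual designs.
designs = {
    # name:    (mass_kg, nodes, cps,  bits)
    "Zeus":    (1e25,    5e37,  1e61, 1e47),
    "Uranos":  (1e24,    1e39,  1e51, 1e52),
}

for name, (mass, nodes, cps, bits) in designs.items():
    print(f"{name}: {cps / nodes:.0e} cps/node, "
          f"{bits / nodes:.0e} bits/node, "
          f"{cps / mass:.0e} cps/kg")
# Zeus:   2e+23 cps/node, 2e+09 bits/node, 1e+36 cps/kg
# Uranos: 1e+12 cps/node, 1e+13 bits/node, 1e+27 cps/kg
```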

  Computation is already a widely distributed—rather than centralized—resource, and my expectation is that the trend will continue toward greater decentralization. However, as our civilization approaches the densities of computation envisioned above, the distribution of the vast number of processors is likely to have characteristics of these conceptual designs. For example, the idea of Matrioshka shells would take maximal advantage of solar power and heat dissipation. Note that the computational powers of these solar system–scale computers will be achieved, according to my projections in chapter 2, around the end of this century.

  Bigger or Smaller. Given that the computational capacity of our solar system is in the range of 10^70 to 10^80 cps, we will reach these limits early in the twenty-second century, according to my projections. The history of computation tells us that the power of computation expands both inward and outward. Over the last several decades we have been able to place twice as many computational elements (transistors) on each integrated circuit chip about every two years, which represents inward growth (toward greater densities of computation per kilogram of matter). But we are also expanding outward, in that the number of chips is expanding (currently) at a rate of about 8.3 percent per year.78 It is reasonable to expect both types of growth to continue, and for the outward growth rate to increase significantly once we approach the limits of inward growth (with three-dimensional circuits).
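  Combining the two growth modes gives a sense of the overall rate; the sketch below simply multiplies the inward factor (a doubling every two years) by the outward factor (8.3 percent per year):

```python
import math

# Combined inward (density) and outward (chip count) growth.
inward_per_year = 2 ** (1 / 2)     # doubling every two years
outward_per_year = 1.083           # 8.3 percent more chips per year

combined = inward_per_year * outward_per_year
print(f"combined growth: ~{(combined - 1) * 100:.0f}% per year")                  # ~53%
print(f"combined doubling time: ~{math.log(2) / math.log(combined):.1f} years")   # ~1.6
```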

  Moreover, once we bump up against the limits of matter and energy in our solar system to support the expansion of computation, we will have no choice but to expand outward as the primary form of growth. We discussed earlier the speculation that finer scales of computation might be feasible—on the scale of subatomic particles. Such pico- or femtotechnology would permit continued growth of computation by continued shrinking of feature sizes. Even if this is feasible, however, there are likely to be major technical challenges in mastering subnanoscale computation, so the pressure to expand outward will remain.

  Expanding Beyond the Solar System. Once we do expand our intelligence beyond the solar system, at what rate will this take place? The expansion will not start out at the maximum speed, but it will quickly reach a speed within a vanishingly small margin of the maximum speed (the speed of light or greater). Some critics have objected to this notion, insisting that it would be very difficult to send people (or advanced organisms from any other ETI civilization) and equipment at near the speed of light without crushing them. Of course, we could avoid this problem by accelerating slowly, but another problem would be collisions with interstellar material. But again, this objection entirely misses the point of the nature of intelligence at this stage of development. Early ideas about the spread of ETI through the galaxy and universe were based on the migration and colonization patterns of our human history and basically involved sending settlements of humans (or, in the case of other ETI civilizations, intelligent organisms) to other star systems. These settlers would then multiply through normal biological reproduction and continue to spread in like manner from there.

  But as we have seen, by late in this century nonbiological intelligence on the Earth will be many trillions of times more powerful than biological intelligence, so sending biological humans on such a mission would not make sense. The same would be true for any other ETI civilization. This is not simply a matter of biological humans sending robotic probes. Human civilization by that time will be nonbiological for all practical purposes.

  These nonbiological sentries would not need to be very large and in fact would primarily comprise information. It is true, however, that just sending information would not be sufficient, for some material-based device that can have a physical impact on other star and planetary systems must be present. However, it would be sufficient for the probes to be self-replicating nanobots (note that a nanobot has nanoscale features but that the overall size of a nanobot is measured in microns).79 We could send swarms of many trillions of them, with some of these “seeds” taking root in another planetary system and then replicating by finding the appropriate materials, such as carbon and other needed elements, and building copies of themselves.
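  The arithmetic of self-replication shows why a microscopic seed is enough. The population size, seed count, and replication time below are purely illustrative assumptions, since the text gives no figures:

```python
import math

# Illustrative growth of a self-replicating nanobot colony.
SEEDS = 100                    # assumed seeds that "take root"
TARGET_POPULATION = 1e30       # assumed size of a mature colony
HOURS_PER_DOUBLING = 1.0       # assumed time for each nanobot to build a copy

doublings = math.log2(TARGET_POPULATION / SEEDS)
print(f"doublings needed: {doublings:.0f}")                                # ~93
print(f"time to maturity: {doublings * HOURS_PER_DOUBLING / 24:.1f} days") # ~3.9
```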

  Once established, the nanobot colony could obtain the additional information it needs to optimize its intelligence from pure information transmissions that involve only energy, not matter, and that are sent at the speed of light. Unlike large organisms such as humans, these nanobots, being extremely small, could travel at close to the speed of light. Another scenario would be to dispense with the information transmissions and embed the information needed in the nanobots’ own memory. That’s an engineering decision we can leave to these future superengineers.

  The software files could be spread out among billions of devices. Once one or a few of them get a “foothold” by self-replicating at a destination, the now much larger system could gather up the nanobots traveling in the vicinity so that from that time on, the bulk of the nanobots sent in that direction do not simply fly by. In this way, the now established colony can gather up the information, as well as the distributed computational resources, it needs to optimize its intelligence.

  The Speed of Light Revisited. In this way the maximum speed of expansion of a solar system–size intelligence (that is, a type II civilization) into the rest of the universe would be very close to the speed of light. We currently understand the maximum speed to transmit information and material objects to be the speed of light, but there are at least suggestions that this may not be an absolute limit.

  We have to regard the possibility of circumventing the speed of light as speculative, and my projections of the profound changes that our civilization will undergo in this century make no such assumption. However, the potential to engineer around this limit has important implications for the speed with which we will be able to colonize the rest of the universe with our intelligence.

  Recent experiments have measured the flight time of photons at nearly twice the speed of light, a result of quantum uncertainty on their position.80 However, this result is really not useful for this analysis, because it does not actually allow information to be communicated faster than the speed of light, and we are fundamentally interested in communication speed.

  Another intriguing suggestion of an action at a distance that appears to occur at speeds far greater than the speed of light is quantum disentanglement. Two particles created together may be “quantum entangled,” meaning that while a given property (such as the phase of a particle’s spin) is not determined in either particle, the resolution of this ambiguity in the two particles will occur at the same moment. In other words, if the undetermined property is measured in one of the particles, it will also be determined as the exact same value at the same instant in the other particle, even if the two have traveled far apart. There is an appearance of some sort of communication link between the particles.

  This quantum disentanglement has been measured at many times the speed of light, meaning that resolution of the state of one particle appears to resolve the state of the other particle in an amount of time that is a small fraction of the time it would take if the information were transmitted from one particle to the other at the speed of light (in theory, the time lapse is zero). For example, Dr. Nicolas Gisin of the University of Geneva sent quantum-entangled photons in opposite directions through optical fibers across Geneva. When the photons were seven miles apart, they each encountered a glass plate. Each photon had to “decide” whether to pass through or bounce off the plate (which previous experiments with non-quantum-entangled photons have shown to be a random choice). Yet because the two photons were quantum entangled, they made the same decision at the same moment. Many repetitions provided the identical result.81

  The experiments have not absolutely ruled out the explanation of a hidden variable—that is, an unmeasurable state of each particle that is in phase (set to the same point in a cycle), so that when one particle is measured (for example, has to decide its path through or off a glass plate), the other has the same value of this internal variable. So the “choice” is generated by an identical setting of this hidden variable, rather than being the result of actual communication between the two particles. However, most quantum physicists reject this interpretation.
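  A toy illustration of the hidden-variable idea: if each pair of photons carries the same preset internal value, the two detectors always agree with no communication at all. This is not a simulation of real quantum mechanics, and it says nothing about the Bell-type tests that constrain such explanations:

```python
import random

# Shared hidden variable: both photons carry the same preset "decision."
def make_pair():
    shared = random.choice(["pass", "reflect"])   # the hidden variable
    return shared, shared                         # each photon carries a copy

trials = 100_000
agreements = sum(a == b for a, b in (make_pair() for _ in range(trials)))
print(f"agreement rate: {agreements / trials:.3f}")   # always 1.000
```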

  Yet even if we accept the interpretation of these experiments as indicating a quantum link between the two particles, the apparent communication is transmitting only randomness (profound quantum randomness) at speeds far greater than the speed of light, not predetermined information, such as the bits in a file. This communication of quantum random decisions to different points in space could have value, however, in applications such as providing encryption codes. Two different locations could receive the same random sequence, which could then be used by one location to encrypt a message and by the other to decipher it. It would not be possible for anyone else to eavesdrop on the encryption code without destroying the quantum entanglement and thereby being detected. There are already commercial encryption products incorporating this principle. This is a fortuitous application of quantum mechanics because of the possibility that another application of quantum mechanics—quantum computing—may put an end to the standard method of encryption based on factoring large numbers (which quantum computing, with a large number of entangled qubits, would be good at).
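  A minimal sketch of that encryption application, using a locally generated byte string to stand in for the shared random sequence the two locations would receive:

```python
import secrets

# One-time-pad style use of a shared random sequence.
message = b"meet at the Dyson shell"
shared_random = secrets.token_bytes(len(message))   # stand-in for the shared sequence

ciphertext = bytes(m ^ k for m, k in zip(message, shared_random))    # sender encrypts
recovered  = bytes(c ^ k for c, k in zip(ciphertext, shared_random)) # receiver decrypts

assert recovered == message
print(ciphertext.hex())
```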

  Yet another faster-than-the-speed-of-light phenomenon is the speed with which galaxies can recede from each other as a result of the expansion of the universe. If the distance between two galaxies is greater than what is called the Hubble distance, then these galaxies are receding from one another at faster than the speed of light.82 This does not violate Einstein’s special theory of relativity, because this velocity is caused by space itself expanding rather than the galaxies moving through space. However, it also doesn’t help us transmit information at speeds faster than the speed of light.
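  The Hubble distance itself follows from the relation v = H0 × d; the value of the Hubble constant below is a representative figure, not one cited in the text:

```python
# Distance at which the cosmological recession speed reaches the speed of light.
C_KM_S = 299_792.458        # speed of light, km/s
H0 = 70.0                   # assumed Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.262e6        # light-years per megaparsec

hubble_distance_mpc = C_KM_S / H0
print(f"Hubble distance: {hubble_distance_mpc:.0f} Mpc "
      f"(~{hubble_distance_mpc * LY_PER_MPC / 1e9:.1f} billion light-years)")
# roughly 14 billion light-years
```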

  Wormholes. There are two exploratory conjectures that suggest ways to circumvent the apparent limitation of the speed of light. The first is to use wormholes—folds of the universe in dimensions beyond the three visible ones. This does not really involve traveling at speeds faster than the speed of light but merely means that the topology of the universe is not the simple three-dimensional space that naive physics implies. However, if wormholes or folds in the universe are ubiquitous, perhaps these shortcuts would allow us to get everywhere quickly. Or perhaps we can even engineer them.