Page 28 of Pale Blue Dot


  ON OCTOBER 12, 1992—auspiciously or otherwise the 500th anniversary of the "discovery" of America by Christopher Columbus—NASA turned on its new SETI program. At a radio telescope in the Mojave Desert, a search was initiated, intended to cover the entire sky systematically—like META, making no guesses about which stars are more likely, but greatly expanding the frequency coverage. At the Arecibo Observatory, an even more sensitive NASA study began that concentrated on promising nearby star systems. When fully operational, the NASA searches would have been able to detect much fainter signals than META, and look for kinds of signals that META could not.

  The META experience reveals a thicket of background static and radio interference. Quick reobservation and confirmation of the signal—especially at other, independent radio telescopes—is the key to being sure. Horowitz and I gave NASA scientists the coordinates of our fleeting and enigmatic events. Perhaps they would be able to confirm and clarify our results. The NASA program was also developing new technology, stimulating ideas, and exciting schoolchildren. In the eyes of many it was well worth the $10 million a year being spent on it. But almost exactly a year after authorizing it, Congress pulled the plug on NASA's SETI program. It cost too much, they said. The post-Cold War U.S. defense budget is some 30,000 times larger.

  The chief argument of the principal opponent of the NASA SETI program—Senator Richard Bryan of Nevada—was this [from the Congressional Record for September 22, 1993]:

  So far, the NASA SETI Program has found nothing. In fact, all the decades of SETI research have found no confirmable signs of extraterrestrial life.

  Even with the current NASA version of SETI, I do not think many of its scientists would be willing to guarantee that we are likely to see any tangible results in the [foreseeable] future . . .

  Scientific research rarely, if ever, offers guarantees of success—and I understand that—and the full benefits of such research are often unknown until very late in the process. And I accept that, as well.

  In the case of SETI, however, the chances of success are so remote, and the likely benefits of the program are so limited, that there is little justification for 12 million taxpayer dollars to be expended for this program.

  But how, before we have found extraterrestrial intelligence, can we "guarantee" that we will find it? How, on the other hand, can we know that the chances of success are "remote"? And if we find extraterrestrial intelligence, are the benefits really likely to be "so limited"? As in all great exploratory ventures, we do not know what we will find and we don't know the probability of finding it. If we did, we would not have to look.

  SETI is one of those search programs irritating to those who want well-defined cost/benefit ratios. Whether ETI can be found; how long it would take to find it; and what it would cost to do so are all unknown. The benefits might be enormous, but we can't really be sure of that either. It would of course be foolish to spend a major fraction of the national treasure on such ventures, but I wonder if civilizations cannot be calibrated by whether they pay some attention to trying to solve the great problems.

  Despite these setbacks, a dedicated band of scientists and engineers, centered at the SETI Institute in Palo Alto, California, has decided to go ahead, government or no government. NASA has given them permission to use the equipment already paid for; captains of the electronics industry have donated a few million dollars; at least one appropriate radio telescope is available; and the initial stages of this grandest of all SETI programs are on track. If it can demonstrate that a useful sky survey is possible without being swamped by background noise (and especially if, as the META experience makes very likely, there are unexplained candidate signals), perhaps Congress will change its mind once more and fund the project.

  Meanwhile, Paul Horowitz has come up with a new program—different from META, different from what NASA was doing—called BETA. BETA stands for "Billion-channel ExtraTerrestrial Assay." It combines narrow-band sensitivity, wide frequency coverage, and a clever way to verify signals as they're detected. If The Planetary Society can find the additional support, this system—much cheaper than the former NASA program—should be on the air soon.

  WOULD I LIKE TO BELIEVE that with META we've detected transmissions from other civilizations out there in the dark, sprinkled through the vast Milky Way Galaxy? You bet. After decades of wondering and studying this problem, of course I would. To me, such a discovery would be thrilling. It would change everything. We would be hearing from other beings, independently evolved over billions of years, viewing the Universe perhaps very differently, probably much smarter, certainly not human. How much do they know that we don't?

  For me, no signals, no one calling out to us is a depressing prospect. "Complete silence," said Jean-Jacques Rousseau in a different context, "induces melancholy; it is an image of death." But I'm with Henry David Thoreau: "Why should I feel lonely? Is not our planet in the Milky Way?"

  The realization that such beings exist and that, as the evolutionary process requires, they must be very different from us, would have a striking implication: Whatever differences divide us down here on Earth are trivial compared to the differences between any of us and any of them. Maybe it's a long shot, but the discovery of extraterrestrial intelligence might play a role in unifying our squabbling and divided planet. It would be the last of the Great Demotions, a rite of passage for our species and a transforming event in the ancient quest to discover our place in the Universe.

  In our fascination with SETI, we might be tempted, even without good evidence, to succumb to belief, but this would be self-indulgent and foolish. We must surrender our skepticism only in the face of rock-solid evidence. Science demands a tolerance for ambiguity. Where we are ignorant, we withhold belief. Whatever annoyance the uncertainty engenders serves a higher purpose: It drives us to accumulate better data. This attitude is the difference between science and so much else. Science offers little in the way of cheap thrills. The standards of evidence are strict. But when followed they allow us to see far, illuminating even a great darkness.

  CHAPTER 21: TO THE SKY!

  The stairs of the sky are let down for him that he may ascend thereon to heaven. O gods, put your arms under the king: raise him, lift him to the sky. To the sky! To the sky!

  —HYMN FOR A DEAD PHARAOH (EGYPT, CA. 2600 B.C.)

  When my grandparents were children, the electric light, the automobile, the airplane, and the radio were stupefying technological advances, the wonders of the age. You might hear wild stories about them, but you could not find a single exemplar in that little village in Austria-Hungary, near the banks of the river Bug. But in that same time, around the turn of the last century, there were two men who foresaw other, far more ambitious, inventions—Konstantin Tsiolkovsky, the theoretician, a nearly deaf schoolteacher in the obscure Russian town of Kaluga, and Robert Goddard, the engineer, a professor at an equally obscure American college in Massachusetts. They dreamt of using rockets to journey to the planets and the stars. Step by step, they worked out the fundamental physics and many of the details. Gradually, their machines took shape. Ultimately, their dream proved infectious.

  In their time, the very idea was considered disreputable, or even a symptom of some obscure derangement. Goddard found that merely mentioning a voyage to other worlds subjected him to ridicule, and he dared not publish or even discuss in public his long-term vision of flights to the stars. As teenagers, both had epiphanal visions of spaceflight that never left them. "I still have dreams in which I fly up to the stars in my machine," Tsiolkovsky wrote in middle age. "It is difficult to work all on your own for many years, in adverse conditions without a gleam of hope, without any help." Many of his contemporaries thought he was truly mad. Those who knew physics better than Tsiolkovsky and Goddard—including The New York Times in a dismissive editorial not retracted until the eve of Apollo 11—insisted that rockets could not work in a vacuum, that the Moon and the planets were forever beyond human reach.

  A generation later, inspired by Tsiolkovsky and Goddard, Wernher von Braun was constructing the first rocket capable of reaching the edge of space, the V-2. But in one of those ironies with which the twentieth century is replete, von Braun was building it for the Nazis—as an instrument of indiscriminate slaughter of civilians, as a "vengeance weapon" for Hitler, the rocket factories staffed with slave labor, untold human suffering exacted in the construction of every booster, and von Braun himself made an officer in the SS. He was aiming at the Moon, he joked unselfconsciously, but hit London instead.

  Another generation later, building on the work of Tsiolkovsky and Goddard, extending von Braun's technological genius, we were up there in space, silently circumnavigating the Earth, treading the ancient and desolate lunar surface. Our machines—increasingly competent and autonomous—were spreading through the Solar System, discovering new worlds, examining them closely, searching for life, comparing them with Earth.

  This is one reason that in the long astronomical perspective there is something truly epochal about "now"—which we can define as the few centuries centered on the year you're reading this book. And there's a second reason: This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself—as well as to vast numbers of others. Let me recount the ways:

  • We've been burning fossil fuels for hundreds of thousands of years. By the 1960s, there were so many of us burning wood, coal, oil, and natural gas on so large a scale, that scientists began to worry about the increasing greenhouse effect; the dangers of global warming began slowly slipping into public consciousness.

  • CFCs were invented in the 1920s and 1930s; in 1974 they were discovered to attack the protective ozone layer. Fifteen years later a worldwide ban on their production was going into effect.

  • Nuclear weapons were invented in 1945. It took until 1983 before the global consequences of thermonuclear war were understood. By 1992, large numbers of warheads were being dismantled.

  • The first asteroid was discovered in 1801. More or less serious proposals to move them around were floated beginning in the 1980s. Recognition of the potential dangers of asteroid deflection technology followed shortly after.

  • Biological warfare has been with us for centuries, but its deadly mating with molecular biology has occurred only lately.

  • We humans have already precipitated extinctions of species on a scale unprecedented since the end of the Cretaceous Period. But only in the last decade has the magnitude of these extinctions become clear, and the possibility raised that in our ignorance of the interrelations of life on Earth we may be endangering our own future.

  Look at the dates on this list and consider the range of new technologies currently under development. Is it not likely that other dangers of our own making are yet to be discovered, some perhaps even more serious?

  In the littered field of discredited self-congratulatory chauvinisms, there is only one that seems to hold up, one sense in which we are special: Due to our own actions or inactions, and the misuse of our technology, we live at an extraordinary moment, for the Earth at least—the first time that a species has become able to wipe itself out. But this is also, we may note, the first time that a species has become able to journey to the planets and the stars. The two times, brought about by the same technology, coincide—a few centuries in the history of a 4.5-billion-year-old planet. If you were somehow dropped down on the Earth randomly at any moment in the past (or future), the chance of arriving at this critical moment would be less than 1 in 10 million. Our leverage on the future is high just now.
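The arithmetic behind that 1-in-10-million figure can be checked directly. Taking "a few centuries" as roughly 300 years (my round number, not a figure from the text):

```latex
% A 300-year window out of the planet's 4.5-billion-year history:
\[
  \frac{3\times10^{2}\ \text{yr}}{4.5\times10^{9}\ \text{yr}}
  \;=\; \frac{1}{1.5\times10^{7}}
  \;\approx\; \text{1 chance in 15 million,}
\]
% which is indeed less than 1 in 10 million, as stated.
```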

  It might be a familiar progression, transpiring on many worlds—a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.

  Since, in the long run, every planetary society will be endangered by impacts from space, every surviving civilization is obliged to become spacefaring—not because of exploratory or romantic zeal, but for the most practical reason imaginable: staying alive. And once you're out there in space for centuries and millennia, moving little worlds around and engineering planets, your species has been pried loose from its cradle. If they exist, many other civilizations will eventually venture far from home.

  A MEANS HAS BEEN OFFERED of estimating how precarious our circumstances are—remarkably, without in any way addressing the nature of the hazards. J. Richard Gott III is an astrophysicist at Princeton University. He asks us to adopt a generalized Copernican principle, something I've described elsewhere as the Principle of Mediocrity. Chances are that we do not live in a truly extraordinary time. Hardly anyone ever did. The probability is high that we're born, live out our days, and die somewhere in the broad middle range of the lifetime of our species (or civilization, or nation). Almost certainly, Gott says, we do not live in first or last times. So if your species is very young, it follows that it's unlikely to last long—because if it were to last long, you (and the rest of us alive today) would be extraordinary in living, proportionally speaking, so near the beginning.

  What then is the projected longevity of our species? Gott concludes, at the 97.5 percent confidence level, that there will be humans for no more than 8 million years. That's his upper limit, about the same as the average lifetime of many mammalian species. In that case, our technology neither harms nor helps. But Gott's lower limit, with the same claimed reliability, is only 12 years. He will not give you 40-to-1 odds that humans will still be around by the time babies now alive become teenagers. In everyday life we try very hard not to take risks so large, not to board airplanes, say, with 1 chance in 40 of crashing. We will agree to surgery in which 95 percent of patients survive only if our disease has a greater than 5 percent chance of killing us. Mere 40-to-1 odds on our species surviving another 12 years would be, if valid, a cause for supreme concern. If Gott is right, not only may we never be out among the stars; there's a fair chance we may not be around long enough even to make the first footfall on another planet.
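Gott's bounds follow from a one-line calculation. The following is a sketch under the stated Copernican assumption; the notation (past age t_past, remaining lifetime t_future) is mine, not Gott's:

```latex
% Assume the moment of observation is random within the species'
% total lifetime T: the elapsed fraction r = t_past / T is then
% uniform on (0,1). With probability 0.95, 0.025 < r < 0.975,
% which inverts to t_past/0.975 < T < t_past/0.025. Subtracting
% t_past gives the remaining lifetime t_future = T - t_past:
\[
  \frac{t_{\mathrm{past}}}{39}
  \;<\; t_{\mathrm{future}} \;<\;
  39\, t_{\mathrm{past}}
  \qquad \text{(95\% confidence).}
\]
% Each one-sided bound taken alone (the upper limit by itself,
% or the lower by itself) holds at the 97.5 percent level
% quoted in the text: hence the 40-to-1 odds.
```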

  To me, this argument has a strange, vaporish quality. Knowing nothing about our species except how old it is, we make numerical estimates, claimed to be highly reliable, about its future prospects. How? We go with the winners. Those who have been around are likely to stay around. Newcomers tend to disappear. The only assumption is the quite plausible one that there is nothing special about the moment at which we inquire into the matter. So why is the argument unsatisfying? Is it just that we are appalled by its implications?

  Something like the Principle of Mediocrity must have very broad applicability. But we are not so ignorant as to imagine that everything is mediocre. There is something special about our time—not just the temporal chauvinism that those who reside in any epoch doubtless feel, but something, as outlined above, clearly unique and strictly relevant to our species' future chances: This is the first time that (a) our exponentiating technology has reached the precipice of self-destruction, but also the first time that (b) we can postpone or avoid destruction by going somewhere else, somewhere off the Earth.

  These two clusters of capabilities, (a) and (b), make our time extraordinary in directly contradictory ways—which both (a) strengthen and (b) weaken Gott's argument. I don't know how to predict whether the new destructive technologies will hasten, more than the new spaceflight technologies will delay, human extinction. But since never before have we contrived the means of annihilating ourselves, and never before have we developed the technology for settling other worlds, I think a compelling case can be made that our time is extraordinary precisely in the context of Gott's argument. If this is true, it significantly increases the margin of error in such estimates of future longevity. The worst is worse, and the best better: Our short-term prospects are even bleaker and—if we can survive the short-term—our long-term chances even brighter than Gott calculates.

  But the former is no more cause for despair than the latter is for complacency. Nothing forces us to be passive observers, clucking in dismay as our destiny inexorably works itself out. If we cannot quite seize fate by the neck, perhaps we can misdirect it, or mollify it, or escape it.

  Of course we must keep our planet habitable—not on a leisurely timescale of centuries or millennia, but urgently, on a timescale of decades or even years. This will involve changes in government, in industry, in ethics, in economics, and in religion. We've never done such a thing before, certainly not on a global scale. It may be too difficult for us. Dangerous technologies may be too widespread. Corruption may be too pervasive. Too many leaders may be focused on the short term rather than the long. There may be too many quarreling ethnic groups, nation-states, and ideologies for the right kind of global change to be instituted. We may be too foolish to perceive even what the real dangers are, or that much of what we hear about them is determined by those with a vested interest in minimizing fundamental change.

  However, we humans also have a history of making long-lasting social change that nearly everyone thought impossible. Since our earliest days, we've worked not just for our own advantage but for our children and our grandchildren. My grandparents and parents did so for me. We have often, despite our diversity, despite endemic hatreds, pulled together to face a common enemy. We seem, these days, much more willing to recognize the dangers before us than we were even a decade ago. The newly recognized dangers threaten all of us equally. No one can say how it will turn out down here.