  The writers of real SF refuse to sink to fear-mongering, but neither do we overindulge in boosterism—both are equally mindless activities.

  Still, we do have an essential societal role, one being fulfilled by no one else. Actual scientists are constrained in what they can say—even with tenure, which supposedly ensures the right to pursue any line of inquiry, scientists are in fact muzzled at the most fundamental, economic level. They cannot speculate openly about the potential downsides of their work, because they rely on government grants or private-sector consulting contracts.

  Well, the government is answerable to an often irrational public. If a scientist is dependent on government grants, those grants can easily disappear. And if he or she is employed in the private sector, well, then certainly Motorola doesn’t want anyone to say that cellular phones might cause brain cancer; Dow Chemical didn’t want anyone to say that silicone implants might cause autoimmune problems; and Philip Morris doesn’t want anyone to say that nicotine might be addictive.

  Granted, not all those potential dangers turned out to be real, but even considering them, putting them on the table for discussion, was not part of the game plan; indeed, suppressing possible negatives is key to how all businesses, including those built on science and technology, work.

  There are moments—increasingly frequent moments—during which the media reports that “Science fiction has become science fact.” Certainly one of the most dramatic recent examples was made public in February 1997. Ian Wilmut at the Roslin Institute near Edinburgh had succeeded in taking a cell from an adult mammal and producing from it an exact genetic duplicate: the cloned sheep named Dolly.

  Dr. Wilmut was interviewed all over the world, and, of course, every reporter asked him about the significance of his work, the ramifications, the effects it would have on family life. And his response was doggedly the same, time and again: cloning, he said, had narrow applications in the field of animal husbandry.

  That was all he could say. He couldn’t answer the question directly. He couldn’t tell reporters that it was now technically possible for a man who was 35 years old, who had been drinking too much, and smoking, and never exercising, a man who had been warned by his doctor that his heart and lungs and liver would all give out by the time he was in his early fifties, to order up an exact genetic duplicate of himself, a duplicate that, by the time he needed all those replacement parts, would be sixteen or seventeen years old, with pristine, youthful versions of the very organs that needed replacing, replacements that could be transplanted with zero chance of tissue rejection.

  Why, the man who needed these organs wouldn’t even have to go to any particular expense—just have the clone of himself created, put the clone up for adoption—possibly even an illegal adoption, in which the adopting parents pay money for the child, a common enough if unsavory practice, letting the man recover the costs of the cloning procedure. Then, let the adoptive parents raise the child with their money, and when it is time to harvest the organs, just track down the teenager, and kidnap him, and—well, you get the picture. Just another newspaper report of a missing kid.

  Far-fetched? Not that I can see; indeed, there may be adopted children out there right now who, unbeknownst to them or their guardians, are clones of the wunderkinds of Silicon Valley or the lions of Wall Street. But the man who cloned Dolly couldn’t speculate on this possibility, or any of the dozens of other scenarios that immediately come to mind. He couldn’t speculate because if he did, he’d be putting his future funding at risk. His continued ability to do research depended directly on him keeping his mouth shut.

  The same mindset was driven home for me quite recently. I am co-hosting a two-hour documentary called “Inventing the Future: 2000 Years of Discovery” for the Canadian version of The Discovery Channel, and in November 1999 I went to Princeton University to interview Joe Tsien, who created the “Doogie Mice”—mice that were born more intelligent than normal mice and retained their smarts longer.

  While my producer and the camera operator fussed over the lighting setup, Dr. Tsien and I chatted animatedly about the ramifications of his research, and there was no doubt that he and his colleagues understood how far-reaching they would be. Indeed, posted by the door to Dr. Tsien’s lab, in a spot not normally seen by the public, is a cartoon of a giant rodent labeled “Doogie” sitting in front of a computer. In Doogie’s right hand is his computer’s pointing device—a little human figure labeled “Joe”: the super-smart mouse using its human creator as a computer mouse.

  Finally, the camera operator was ready, and we started taping. “So, Dr. Tsien,” I said, beginning the interview, “how did you come to create these super-intelligent mice?”

  And Tsien made a “cut” motion with his hand and stepped forward, telling the camera operator to stop. “I don’t want to use the word ‘intelligent,’” he said. “We can talk about the mice having better memories, but not about them being smarter. The public will be all over me if they think we’re making animals more intelligent.”

  “But you are making them more intelligent,” said my producer. Indeed, Tsien had used the word “intelligent” repeatedly while we’d been chatting.

  “Yes, yes,” he said. “But I can’t say that for public consumption.”

  The muzzle was clearly on. We soldiered ahead with the interview, but never really got what we wanted. I’m not sure whether Tsien is a science-fiction fan (he had no idea that I was also a science-fiction writer), but many SF fans have wondered why he didn’t name his super-smart mice “Algernons,” after the experimental rodent in Daniel Keyes’s Flowers for Algernon.

  Tsien might have been aware of the reference, but chose the much more palatable “Doogie”—a tip of the hat to the old TV show Doogie Howser, M.D., about a boy genius who becomes a medical doctor while still a teenager—because, of course, in Flowers for Algernon, the leap is made directly from the work on mice to the mind-expanding possibilities for humans, and Tsien was clearly trying to restrain, not encourage, such leaps.

  So, we’re back to where we started: someone needs to openly do the speculation, to weigh the consequences, to consider the ramifications—someone who is immune to economic pressures. And that someone is the science-fiction writer.

  And, of course, we do precisely that—and have done so from the outset. Brian Aldiss, and many other critics, contend that the first science-fiction novel was Mary Shelley’s Frankenstein, and I think they’re right. In that novel, Victor is a scientist, and he’s learned about reanimating dead matter by studying the process of decay that occurs after death. Take out his scientific training, and his scientific research, and his scientific theory, and, for the first time in the history of fiction, there’s no story left. Like so much of the science fiction that followed, Frankenstein, first published in 1818, is a cautionary tale, depicting the things that can go wrong, in this case, with the notion of biological engineering.

  Science-fiction writers have considered the pluses and minuses of other new technologies, too, of course. We were among the first to weigh in on the dangers of nuclear power—memorably, for instance, with Judith Merril’s 1948 short story “That Only a Mother”—and, although there are still SF writers (often, it should be noted, holding university or industry positions tied directly or indirectly to the defense industry) who have always sung the praises of nuclear energy, it’s a fact that governments all over the world are turning away from it.

  The October 18, 1999, edition of Newsweek carried an article which said, “In most parts of the world, the chance of nuclear power plant accidents is now seen as too great. Reactor orders and start-ups have declined markedly since the 1980s. Some countries, including Germany and Sweden, plan to shut down their plants altogether…Nuclear-reactor orders and start-ups ranged from 20 to 40 per year in the 1980s; in 1997 there were just two new orders, and five start-ups worldwide. Last year [1998] construction began on only four new nuclear reactors.”

  Why the sharp decline? Because the cautionary scenarios about nuclear accidents in science fiction have, time and again, become science fact. The International Atomic Energy Agency reports that there were 508 nuclear “incidents” between 1993 and 1998, an average of more than one for each of the world’s 434 operating nuclear power plants.

  It certainly wasn’t out of the scientific community that the warnings were first heard. I vividly recall being at a party about fifteen years ago at which I ran into an old friend from high school. She introduced me to her new husband, a nuclear engineer for Ontario Hydro, the company that operates the nuclear power plants near my home city of Toronto. I asked him what plans were in place in case something went wrong with one of the reactors (this was before the Chernobyl accident in 1986, but after Three Mile Island in 1979). He replied that nothing could go wrong; the system was foolproof. Although we were both early in our careers then, we were precisely fulfilling our respective societal roles. As an engineer employed by the nuclear industry, he had to say the plants were absolutely safe. As a science-fiction writer, I had to be highly skeptical of any such statements.

  Science fiction has weighed in on ecology, overpopulation, racism, the abortion debate (which is also fundamentally a technological issue—the ability to terminate a fetus without harming the mother is a scientific breakthrough whose moral ramifications must be weighed), and, indeed, science fiction has been increasingly considering what I think may be the greatest threat of all: the downsides of creating artificial intelligence. Examples range from William Gibson’s Hugo-winning 1984 Neuromancer—in which an organization known as “Turing” exists to prevent the emergence of true AI—to my own Hugo-nominated 1998 Factoring Humanity, in which the one and only radio message Earth receives from another star is a warning against the creation of AI, a last gasp from biologicals being utterly supplanted by what they themselves had created without sufficient forethought.

  Which brings us back to the central message of SF: “Look with a skeptical eye at new technologies.” Has that message gotten through to the general public? Has society at large embraced it in a way that it never embraced “Don’t commit murder, because you will never get away with it”?

  And the answer, I think, is absolutely yes. Society has co-opted the science-fictional worldview wholly and completely. Do we now build a new dam just because we can? Not without an environmental-impact study. Do we put high-energy power lines near public schools? Not anymore. Did we all rush out to start eating potato chips made with Olestra, the fake fat that robs the body of nutrients and causes abdominal cramping and loose stools? No.

  And what about the example I started with—cloning? Indeed, what about the whole area of genetic research?

  Well, when the first Cro-Magnon produced the first stone-tipped wooden spear, none of his hirsute brethren stopped to think about the fact that whole species would be driven to extinction by human hunting. When the United States undertook the Manhattan Project, not one cent was budgeted for considering the societal ramifications of the creation of nuclear weapons—despite the fact that their existence, more than any other single thing, shaped the mindset of the rest of the century.

  But for the Human Genome Project, fully five percent of the total budget is set aside for that thing SF writers love to do the most: just plain old noodling—thinking about the consequences, the impacts, that genetic research will have on society.

  That money is allocated because the world now realizes that such thinking is indispensable. Of course, the general public doesn’t think of it as science fiction—to them, thanks to George “I can’t be bothered to look up the meaning of the word parsec” Lucas, SF is the ultimate in escapism, irrelevant to the real world; it’s fantasy stories that only happened a long time ago, in a galaxy far, far away.

  I’m not alone in this view. Joe Haldeman has observed that Star Wars was the worst thing that ever happened to science fiction, because the general public now equates SF with escapism. According to The American Heritage Dictionary, escapism is “the avoidance of reality through fantasy or other forms of diversion.” I do not read SF for escapism, although I do read it for entertainment (which is the same reason I do a lot of my non-fiction reading). But I, and most readers of SF, have no interest in avoiding reality.

  And yet, SF is seen as having nothing to do with the real world. At a family reunion in 1998, a great-aunt of mine asked me what I’d been doing lately, and I said I’d spent the last several months conducting research for my next science-fiction novel. Well, my aunt, an intelligent, educated woman, screwed up her face, and said, “What possible research could you do for a science-fiction book?” SF to her, as to most of the world, is utterly divorced from reality; it’s just crazy stuff we make up as we go along. And so the bioethicists, the demographers, the futurists, and the analysts may not think of themselves as using the tools of science fiction—but they are.

  Our mindset—the mindset honed in the pages of Astounding, the legacy of John Brunner and Isaac Asimov, of Judy Merril and Philip K. Dick—is now central to human thought. Science-fiction writers succeeded beyond their wildest dreams: they changed the way humanity looks at the world.

  Years ago, Sam Moskowitz quipped that anyone could have predicted the automobile—but it would take a science-fiction writer to predict the traffic jam. In the 1960s, my fellow Canadian, Marshall McLuhan, made much the same point, saying that, contrary to the designers’ intentions, every new technology starts out as a boon and ends up as an irritant.

  But now, everyone is a science-fiction writer, even if they never spend any time at a keyboard. When a new technology comes along, we all look at it not with the wide eyes of a kid on Christmas morning, but with skepticism. The days when you could tell the public that a microwave oven would replace the traditional stove are long gone; we all know that new technologies aren’t going to live up to the hype. About the only really interesting thing the microwave did was create the microwave-popcorn industry—and, of course, microwave popcorn, fast and convenient, is also loaded down with fatty oils to aid the popping, taking away the health benefits normally associated with that food item. The upside, the downside—popcorn, the science-fictional snack.

  And what I’m talking about is a science-fictional, not a scientific, perspective. As Dr. David Stephenson, formerly with the National Research Council of Canada and a frequent science guest at SF conventions, has observed, scientists are taught from day one to write in the third-person passive voice: they distance themselves from their prose, removing from the discussion both the doer of the action and the person who is feeling the effects of the action.

  But SF writers do what the scientists must not. We long ago left behind the essentially characterless storytelling practiced by such early writers as George O. Smith. We now strive for characterization as sophisticated as that in the best mainstream literature. Or, to put it another way, science fiction has evolved beyond being what its founding editor, Hugo Gernsback, said it should be: merely fiction about science. Indeed, even Isaac Asimov, known for a rather perfunctory approach to characterization, knew full well that SF was about the impact progress has on real people. His definition of science fiction was “that branch of literature that deals with the responses of human beings to changes in science and technology.”

  And those responses, of course, are often irrational, based on fear and ignorance. But they are responses that cannot be ignored: we—science-fiction readers and writers—do share this planet with the ninety percent of human beings who believe in angels, who believe in a literal heaven and hell, who reject evolution. As much as I admire Arthur C. Clarke—and I do, enormously—the most unrealistic thing about his fiction is how darn reasonable everyone is.

  On May 31, 1999, CBC television had me appear on its current-affairs program Midday to discuss whether or not the space program was a waste of money; I was debating a woman who worked in social services who thought all money—including the tiny, tiny fraction of its gross domestic product that Canada, or even the U.S. for that matter, spends on space—should be used to address problems here on Earth.

  And her clincher argument was this—I swear to God, I’m not making this up: “We should be careful about devoting too much time to science. The people who lived in Atlantis were obsessed with science, and that led to their downfall.”

  My response was to tell her that perhaps if she spent a little more time reading about science, she’d know that Atlantis was a myth, and she wouldn’t make an ass out of herself on national television. But the point here—one that I will come back to—is this: she already understood the central 20th-century science-fictional premise of looking carefully at the ramifications of new technologies, such as space travel. But she was unable to look at them rationally, because of her faulty worldview, a worldview that rendered her incapable of separating myth from reality, fact from fiction.

  If the central message of science fiction has indeed been co-opted by the public at large—if, as I think is true, Frank Herbert’s Dune did as much to raise consciousness about ecology as did Rachel Carson’s Silent Spring—then what role is there for science-fiction writers in the new century?

  I always say that whenever a discussion at a science-fiction convention brings in Star Trek as an example, we’ve hit rock bottom; you can’t imagine Ruth Rendell turning to Scott Turow at a mystery-fiction conference and saying, “You know, that reminds me of that episode of Murder, She Wrote, in which…” But I am going to invoke Star Trek here as an example of how quaint and embarrassing SF ends up looking when it continues to push an old message long after society has gotten the point.

  In the original Star Trek, we saw women and black people in important positions. Uhura, the mini-skirted bridge officer, was hardly the most significant black example; much more important were the facts that Kirk’s boss, as seen in the episode “Court Martial,” was a black man, played with quiet dignity by Percy Rodriguez, and that the ship’s computers, as seen in “The Ultimate Computer,” were designed by a Nobel Prize-winning black cyberneticist, played with equal dignity by William Marshall.