During the era of Martin Luther King and the Watts riots, it was a powerful, important statement to have the white captain of the Enterprise deferring to black people; as Marshall observed thirty years later, the single most significant thing about his guest-starring role was that he, an African-American, was referred to as “Sir” throughout the episode.
But time passed. In 1993, Paramount made much of the fact that we were going to see a black man as the leader on Star Trek: Deep Space Nine—despite the fact that, by this point, blacks had been elected to prominent political positions throughout the United States, and even in South Africa, a bastion of racism in the 1960s, a black man, Nelson Mandela, was about to become president. But, somehow, Star Trek thought it was making a profound statement.
And then, just as embarrassingly, two years later, we were supposed to be stunned by the fact that on Star Trek: Voyager, a woman was the captain of a starship—this, despite the fact that countries from Great Britain to India to Canada had already had female prime ministers, and that women had risen to prominence in all walks of life.
My colleagues and I have long tried to reflect reality in our fiction, and so, naturally, we have diverse casts in our stories. Damon Knight’s famous statement that the most unrealistic thing about science fiction is the preponderance of Americans—practically no one, he correctly observed, is an American—was no longer news to anybody. And, by all means, in a Star Trek of the 1990s we should indeed have seen women and non-whites in prominent roles. But to make it the message, to try to pass it off as a gutsy thing to do, looked ridiculous.
Indeed, David Gerrold famously quit working on Star Trek: The Next Generation back in 1987 in part because that series’ depiction of the future failed to acknowledge the reality that a lot of people are gay; Star Trek had become irrelevant, because the only messages it was comfortable sending out were ones already fully received by the audience.
And, I firmly believe, SF as a whole is now in danger of being perceived as just as quaint, just as dated, just as irrelevant, as the current Star Trek is.
In our search for a new role, should we fall back on the one the media has so often cast us in—that of predictors of the future? I don’t think so. Many SF writers, myself included, are content to occasionally call ourselves “futurists” if that helps get us TV or radio interviews, but we aren’t really. Indeed, I’m not sure that anybody really is a futurist in the modern sense of the term—someone who can reliably predict future trends. Bill Gates is the world’s current technological leader—a futurist if ever there was one—and he, of course, is the same man who once said that no one would ever need a computer with more than 640K of memory.
No, when what we science-fiction writers have written about comes to pass, it usually means society has screwed up. The last thing George Orwell wanted was for the real year 1984 to turn out anything like the vision portrayed in his novel.
Orwell, of course, wrote his book in 1948—he simply reversed the last two digits to make it clear that he was really writing about his present day. Science fiction is indeed very much a literature of its time, and should, of course, be read in historical context.
Still, anyone who needs further convincing that science fiction isn’t a predictive medium need only look at the events of the last few decades. Numerous science-fiction writers predicted that the first humans would set foot upon the Moon in the 1960s, but none of us predicted that we would abandon the Moon—indeed, all manned travel beyond Earth orbit—just three years later. Exactly twelve human beings have walked on the Moon—all white, all male, all American: hardly a representative sampling, but, then again, all of this occurred back when the original Star Trek’s message of an interracial future was one that hadn’t yet been fully received—and there is no sign that that number will increase in the next couple of decades.
We science-fiction writers also utterly missed the fall of the Soviet Union, something that now, in retrospect, seems inevitable—indeed, it was amazing it lasted as long as it did. But we were writing books like Norman Spinrad’s Russian Spring right up to the day of the collapse.
And, perhaps most significant of all, we completely missed the rise of the Internet and the World Wide Web. The genre that gave us Isaac Asimov’s Multivac, Arthur C. Clarke’s HAL 9000, Robert A. Heinlein’s Mycroft Holmes, and even William Gibson’s Wintermute completely failed to predict how the computer revolution was really going to unfold.
Of course when something new comes along—such as the terrible plague of AIDS—we’re quick to weigh in with speculations. But we’re usually so far off the mark that the results end up seeming laughable. Poor Norman Spinrad again: his vision of a world of people having sex with machines—instead of, of course, simply wearing condoms—because of the threat of AIDS, as outlined in his 1988 story “Journals of the Plague Years,” seems absolutely ridiculous and alarmist when we look at it now, a scant decade later.
Some science-fiction writers still gamely try to set stories in the far future—a hundred, two hundred, a thousand years down the road. But the predictive horizon is moving ever closer. No one can make a prediction about what the world will be like even fifty years from now with any degree of confidence. What will be the fruits of the Human Genome Project? Will nanotechnology really work? Will true artificial intelligence emerge? Will cold fusion, or another clean, unlimited energy source, be developed? Will humans upload their consciousnesses into machines? And what wild cards—things we haven’t even thought of yet—will appear?
As Bruce Sterling has observed, people in the future won’t even eat; as Nancy Kress has postulated, with Beggars in Spain, they may not even sleep. What likely predictions could we possibly make about such beings?
In May 1967, Arthur C. Clarke revealed his now-famous “Third Law” during a speech to the American Association of Architects: “Any sufficiently advanced technology is indistinguishable from magic.” The question, of course, is how far ahead of us is “sufficiently advanced”—and the answer, I believe, is fifty years; the world of 2050 is utterly beyond our predictive abilities. With the accelerating rate of change, any year-2000 guess as to what 2050 will be like is almost certainly going to be as far off base as a guess Christopher Columbus might have made about what 2000 would be like.
The pressure for SF to change has been building for a long time. In North America, the sales of science fiction books that aren’t related to Star Trek, Star Wars, or other media properties, are the worst they’ve ever been. Sales are down about fifty percent across the board from 1990, and the readerships of the principal SF magazines—Analog and Asimov’s—have been cut in half. There is no doubt that the reading public is turning away from SF in droves.
The prime cause of the decline in SF readers is that today’s young people are finding all the things that have always attracted young people to SF—big ideas, sense of wonder, action, wish-fulfillment fantasies, stunning visual imagery, nifty aliens, engaging characters—more readily in movies, TV, role-playing games, computer games, and on the Internet than in the pages of printed works.
There’s no doubt that we’ve been outclassed in terms of visual imagery by the wizards at Industrial Light and Magic. Any space battle or alien vista we might care to describe they can realize more vibrantly in pictures than we can with words. To put it crudely: in the past, many of our finest SF writers, including Robert Silverberg and Mike Resnick, supplemented their income by writing pornographic novels. But there’s almost no market left for porno fiction: what’s now shown on videotape is much more vivid and real than anything the reader can imagine. Well, as went novels with titles like Nurses in Need, so, too, will go the space opera that was once a staple of printed SF.
SF will have to change if it is to survive. The public wants something other than what we’ve been giving them. One change we’ll likely see is a move away from the far future as a setting for stories. I don’t even think we need to invoke Kim Stanley Robinson’s criterion that SF stories must be set in the future; I took great pleasure in setting my novel Frameshift, for instance, entirely in the present day, and suspect we’ll see it become much more common for serious SF novels to have contemporary settings.
Indeed, if science fiction is going to have relevancy in the next century, it must assert itself to be part of real life, not far-off tales of escapism. And that brings me back to where we started. We need a new message for the new millennium. Far be it from me to try to impose an agenda on SF—but I think the agenda is already there, implicit in many of our texts, and, indeed, explicit in the actual name of our genre: science fiction.
One of the great intellectual embarrassments of the 20th century is that five hundred years after Copernicus deposed Earth from the centre of the universe, virtually every newspaper carries a daily astrology column—the horoscopes—but astronomy gets, at best, a column once a week, and in many papers not even that.
It’s likewise embarrassing that a hundred and forty years after the publication of The Origin of Species, ignorant people are still succeeding in outlawing the teaching of the fact of evolution.
And it’s mortifying that while the SF section of bookstores shrinks like a puddle under noonday sun, the “New Age” section—full of fabricated stories penned by charlatans—grows like a cancer.
If there is a message science fiction can promulgate for the 21st century—a message that the world needs to hear—it is this: the rational, scientific worldview is the only perspective that effectively deals with reality.
And, at the risk of repeating myself, let me emphasize again that reality is indeed what science fiction is all about. I cringe with embarrassment every time I see that stupid t-shirt not quite concealing a massive belly at a science-fiction convention: “Reality is just a crutch for people who can’t handle science fiction.” What a ridiculous, offensive statement! Science fiction—in its probing of the deep questions, in its abiding concern with moral issues, in its unrelenting quest to expose truth and speculate on consequences, even in its most mind-bending explorations of the quantum nature of the universe—is, more than any other form of entertainment, absolutely about reality.
And reality is the totality of everything; not to invoke Star Trek again, but in the movie Star Trek IV it is revealed that Kiri-kin-tha’s First Law of Metaphysics is that “nothing unreal exists,” a statement no less profound than Descartes’s “I think, therefore I am.”
The scientific method is the single greatest tool of understanding ever devised by humanity. Observe phenomena. Propose an explanation for why the phenomena are as they have been seen to be. Devise an experiment to test whether your explanation is correct. And, if that experiment fails—and this is the powerful part; this is where the beauty comes in—discard the explanation, and start over again.
There will be those who argue that there are other ways of gaining insight into the nature of reality: mystic experiences, contemplation in the absence of experimentation, divine insight, consulting ancient texts. Such methods are demonstrably inferior to the scientific method, for only the scientific method welcomes the detection of error; only the scientific method allows for independent verification and replication.
Now, some will say, well, that’s the western view, and, after all, to paraphrase Damon Knight, hardly anyone is a westerner. Maybe so, but it must be recognized that science fiction is, in fact, a western genre. Fantasy, perhaps, can trace roots all over the world, but science fiction, born of Mary Shelley, nurtured by Jules Verne and H. G. Wells, grew out of the industrial revolution. It is inexorably tied up with western thought.
And the scientific method is the crowning glory of western thought—the glory that allowed us not to simply declare, as the United States’s founders did, that it is “self-evident that all men are created equal” while they still held slaves, but rather that allowed us to prove, through genetic studies that showed that genetic variation within races is greater than the average deviation between the races, and through psychological and anatomical studies that showed that the sexes are equally endowed intellectually, that in fact racism and sexism have no rational basis.
Stephen Jay Gould recently wrote a book called Rocks of Ages: Science and Religion in the Fullness of Life, in which he argues that the spiritual and the rational should have a “loving concordat,” but are in fact “nonoverlapping magisteria”—utterly separate fields, with some questions solely appropriate to the former and others exclusively the province of the latter.
I reject that: I don’t think there’s any question, including the most basic philosophical conundrums of where did we come from, why are we here, what does it all mean, and, indeed, the biggest of them all, is there a God, that cannot be most effectively addressed through the application of the scientific method, especially with its absolute requirement that if an idea—such as the superstition of astrology—is disproven, then it must be willingly discarded.
How can science have anything meaningful to say about whether there is a God? Easily. If the universe had an intelligent designer, it will show signs of intelligent design. Some argue that it clearly does: the relative strengths of the four fundamental forces that drive our universe—gravitation, electromagnetism, the strong nuclear force, and the weak nuclear force—do seem to have been chosen with great care, since any substantial deviation from the present ratios would have resulted in a universe devoid of stars or even atoms.
Likewise, the remarkable thermal properties of water—most notably, that it expands as it freezes and that it has higher surface tension than any other fluid except liquid selenium—seem specifically jiggered to make life possible.
Do these facts prove whether or not God exists? No—not yet. But the best response to those who say science doesn’t hold all the answers is to say, on the contrary, science does indeed hold all the answers—we just don’t have all the science yet.
My favourite review of my own work was a recent one for Flashforward by Henry Mietkiewicz in The Toronto Star, who said, “Sawyer compels us to think rationally about questions we normally consider too metaphysical to grapple with.” But I’m hardly alone in this. Science fiction—right back to such great works as Arthur C. Clarke’s short story “The Star” and James Blish’s A Case of Conscience, through Carl Sagan’s Contact, and, more recently, Mary Doria Russell’s The Sparrow, and, if I may, my own Nebula-winning The Terminal Experiment and my Calculating God—shows that SF, because it embraces the scientific method, is the most effective tool for exploring the deepest of all questions.
So, does science fiction have a role in the 21st century? Absolutely. If we can help shape the Zeitgeist, help inculcate the belief that rational thought, that discarding superstition, that subjecting all beliefs to the test of the scientific method, is the most reasonable approach to any question, then not only will science fiction have a key role to play in the intellectual development of the new century, but it will also, finally and at last, help humanity shuck off the last vestiges of the supernatural, the irrational, the spurious, the fake, and allow us to embrace, to quote poet Archibald Lampman, “the wide awe and wonder of the night” but with our eyes wide open and our minds fully engaged. Then, finally, some 40,000 years after consciousness first flickered into being on this world, we will at last truly deserve that name we bestowed upon ourselves: Homo sapiens—Man of Wisdom.
AI and Sci-Fi: My, Oh, My!
On May 31, 2002, I gave this keynote address at the 12th Annual Conference on Intelligent Systems (a conference on robotics and artificial intelligence), which was held that year in Calgary.
Of course, I don’t read my speeches word for word; I paraphrase them as I go along. But I discovered that there’s a nice little secondary market for the text of speeches, and this one appeared in the essay collection Taking the Red Pill: Science, Philosophy and Religion in The Matrix (Glenn Yeffeth, editor; Benbella Books, Dallas, April 2003) and also in the October 2002 issue of The New York Review of Science Fiction.
Most fans of science fiction know Robert Wise’s 1951 movie The Day the Earth Stood Still. It’s the one with Klaatu, the humanoid alien who comes to Washington, D.C., accompanied by a giant robot named Gort, and it contains that famous instruction to the robot: “Klaatu barada nikto.”
Fewer people know the short story upon which that movie is based: “Farewell to the Master,” written in 1941 by Harry Bates.
In both the movie and the short story, Klaatu, despite his message of peace, is shot by human beings. In the short story, the robot—called Gnut, instead of Gort—comes to stand vigil over the body of Klaatu.
Cliff, a journalist who is the narrator of the story, likens the robot to a faithful dog who won’t leave after his master has died. Gnut manages to essentially resurrect his master, and Cliff says to the robot, “I want you to tell your master…that what happened…was an accident, for which all Earth is immeasurably sorry.”
And the robot looks at Cliff and astonishes him by very gently saying, “You misunderstand. I am the master.”
That’s an early science-fiction story about artificial intelligence—in this case, ambulatory AI, enshrined in a mechanical body. But it presages the difficult relationship that biological beings might have with their silicon-based creations.
Indeed, the word robot was coined in a work of science fiction: when Karel Čapek was writing his 1920 play R.U.R.—set in the factory of Rossum’s Universal…well, universal what? He needed a name for mechanical laborers, and so he took the Czech word robota and shortened it to “robot.” Robota refers to a debt to a landlord that can only be repaid by forced physical labor. But Čapek knew well that the real flesh-and-blood robotniks had rebelled against their landlords in 1848. From the very beginning, the relationship between humans and robots was seen as one that might lead to conflict.