By the middle of the twenty-first century humans will be able to expand their thinking without limit. This is a form of immortality, although it is important to point out that data and information do not necessarily last forever: the longevity of information depends on its relevance, utility, and accessibility. If you’ve ever tried to retrieve information from an obsolete form of data storage in an old, obscure format (for example, a reel of magnetic tape from a 1970 minicomputer), you understand the challenges in keeping software viable. However, if we are diligent in maintaining our mind file, making frequent backups, and porting to current formats and mediums, a form of immortality can be attained, at least for software-based humans. Later in this century it will seem remarkable to people that humans in an earlier era lived their lives without a backup of their most precious information: that contained in their brains and bodies.
Is this form of immortality the same concept as a physical human, as we know it today, living forever? In one sense it is, because today one’s self is not a constant collection of matter, either. Recent research shows that even our neurons, thought to be relatively long lasting, change all of their constituent subsystems, such as the tubules, in a matter of weeks. Only our pattern of matter and energy persists, and even that gradually changes. Similarly, it will be the pattern of a software human that persists and develops and slowly alters.
But is that person based on my mind file, who migrates across many computational substrates and who outlives any particular thinking medium, really me? This consideration takes us back to the same questions of consciousness and identity that have been debated since Plato’s dialogues (which we examine in the next chapter). During the course of the twenty-first century these will not remain topics for polite philosophical debates but will have to be confronted as vital, practical, political, and legal issues.
A related question: Is death desirable? The “inevitability” of death is deeply ingrained in human thinking. If death seems unavoidable, we have little choice but to rationalize it as necessary, even ennobling. The technology of the Singularity will provide practical and accessible means for humans to evolve into something greater, so we will no longer need to rationalize death as a primary means of giving meaning to life.
The Longevity of Information
“The horror of that moment,” the King went on, “I shall never, never forget it!” “You will, though,” the Queen said, “if you don’t make a memorandum of it.”
—LEWIS CARROLL, THROUGH THE LOOKING-GLASS
The only things you can be sure of, so the saying goes, are death and taxes—but don’t be too sure about death.
—JOSEPH STROUT, NEUROSCIENTIST
I do not know, sire, but whatever they will turn out to be, I am sure you will tax them.
—MICHAEL FARADAY, RESPONDING TO A QUESTION FROM THE BRITISH EXCHEQUER AS TO WHAT PRACTICAL USE COULD BE MADE OF HIS DEMONSTRATION OF ELECTROMAGNETISM
Do not go gentle into that good night, . . .
Rage, rage against the dying of the light.
—DYLAN THOMAS
The opportunity to translate our lives, our history, our thoughts, and our skills into information raises the issue of how long information lasts. I have always revered knowledge; as a child I gathered information of all kinds, an inclination I shared with my father.
By way of background, my father was one of those people who liked to store all the images and sounds that documented his life. Upon his untimely death at the age of fifty-eight in 1970, I inherited his archives, which I treasure to this day. I have my father’s 1938 doctoral dissertation from the University of Vienna, which contains his unique insights into the contributions of Brahms to our musical vocabulary. There are albums of neatly arranged newspaper clippings of his acclaimed musical concerts as a teenager in the hills of Austria. There are urgent letters to and from the American music patron who sponsored his flight from Hitler, just before Kristallnacht and related historical developments in Europe in the late 1930s made such escape impossible. These items are among dozens of aging boxes containing a myriad of remembrances, including photographs, musical recordings on vinyl and magnetic tape, personal letters, and even old bills.
I also inherited his penchant for preserving the records of one’s life, so along with my father’s boxes I have several hundred boxes of my own papers and files. My father’s productivity, assisted only by the technology of his manual typewriter and carbon paper, cannot compare with my own prolificacy, aided and abetted by computers and high-speed printers that can reproduce my thoughts in all kinds of permutations.
Tucked away in my own boxes are also various forms of digital media: punch cards, paper-tape reels, and digital magnetic tapes and disks of various sizes and formats. I often wonder just how accessible this information remains. Ironically, the ease of accessing this information is inversely proportional to the level of advancement of the technology used to create it. Most straightforward are the paper documents, which although showing signs of age are eminently readable. Only slightly more challenging are the vinyl records and analog tape recordings. Although some basic equipment is required, it is not difficult to find or use. The punch cards are somewhat more challenging, but it’s still possible to find punch-card readers, and the formats are uncomplicated.
By far the most demanding information to retrieve is that contained on the digital disks and tapes. Consider the challenges involved. For each medium I have to figure out exactly which disk or tape drive was used, whether an IBM 1620 circa 1960 or a Data General Nova I circa 1973. Then, once I’ve assembled the requisite equipment, there are layers of software to deal with: the appropriate operating system, device drivers, and application programs. And, when I run into the inevitable scores of problems inherent in each layer of hardware and software, just whom am I going to call for assistance? It’s hard enough getting contemporary systems to work, let alone systems for which the help desks were disbanded decades ago (if they ever existed). Even at the Computer History Museum most of the devices on display stopped functioning many years ago.41
Assuming I do prevail against all of these obstacles, I have to account for the fact that the actual magnetic data on the disks has probably decayed, so that, even with working hardware, the old computers would mostly generate error messages.42 But is the information gone? The answer is, Not entirely. Even though the magnetic spots may no longer be readable by the original equipment, the faded regions could be enhanced by suitably sensitive equipment, via methods analogous to the image enhancement often applied to the pages of old books when they are scanned. The information is still there, although very difficult to get at. With enough devotion and historical research, one might actually retrieve it. If we had reason to believe that one of these disks contained secrets of enormous value, we would probably succeed in recovering the information.
But mere nostalgia is unlikely to be sufficient to motivate anyone to undertake this formidable task. Because I largely anticipated this dilemma, I made paper printouts of most of these old files. But keeping all our information on paper is not the answer, as hard-copy archives present their own set of problems. Although I can readily read even a century-old paper manuscript if I’m holding it in my hand, finding a desired document from among thousands of only modestly organized file folders can be a frustrating and time-consuming task. It can take an entire afternoon to locate the right folder, not to mention the risk of straining one’s back from moving dozens of heavy file boxes. Using microfilm or microfiche may alleviate some of the difficulty, but the matter of locating the right document remains.
I have dreamed of taking these hundreds of thousands of records and scanning them into a massive personal database, which would allow me to utilize powerful contemporary search-and-retrieve methods on them. I even have a name for this venture—DAISI (Document and Image Storage Invention)—and have been accumulating ideas for it for many years. Computer pioneer Gordon Bell (former chief engineer of Digital Equipment Corporation), DARPA (Defense Advanced Research Projects Agency), and the Long Now Foundation are also working on systems to address this challenge.43
DAISI will involve the rather daunting task of scanning and patiently cataloging all these documents. But the real challenge to my dream of DAISI is surprisingly deep: how can I possibly select appropriate hardware and software layers that will give me the assurance that my archives will be viable and accessible decades from now?
Of course my own archival needs are only a microcosm of the exponentially expanding knowledge base that human civilization is accumulating. It is this shared specieswide knowledge base that distinguishes us from other animals. Other animals communicate, but they don’t accumulate an evolving and growing base of knowledge to pass down to the next generation. Since we are writing our precious heritage in what medical informatics expert Bryan Bergeron calls “disappearing ink,” our civilization’s legacy would appear to be at great risk.44 The danger appears to be growing exponentially along with the growth of our knowledge bases. The problem is further exacerbated by the accelerating speed with which we adopt new standards in the many layers of hardware and software we employ to store information.
There is another valuable repository of information stored in our brains. Our memories and skills, although they may appear to be fleeting, do represent information, coded in vast patterns of neurotransmitter concentrations, interneuronal connections, and other relevant neural details. This information is the most precious of all, which is one reason death is so tragic. As we have discussed, we will ultimately be able to access, permanently archive, and understand the thousands of trillions of bytes of information we have tucked away in each of our brains.
Copying our minds to other mediums raises a number of philosophical issues, which I will discuss in the next chapter—for example, “Is that really me or rather someone else who just happens to have mastered all my thoughts and knowledge?” Regardless of how we resolve these issues, the idea of capturing the information and information processes in our brains seems to imply that we (or at least entities that act very much like we do) could “live forever.” But is that really the implication?
For eons the longevity of our mental software has been inexorably linked to the survival of our biological hardware. Being able to capture and reinstantiate all the details of our information processes would indeed separate these two aspects of our mortality. But as we have seen, software itself does not necessarily survive forever, and there are formidable obstacles to its enduring very long at all.
So whether information represents one man’s sentimental archive, the accumulating knowledge base of the human-machine civilization, or the mind files stored in our brains, what can we conclude about the ultimate longevity of software? The answer is simply this: Information lasts only so long as someone cares about it. The conclusion that I’ve come to with regard to my DAISI project, after several decades of careful consideration, is that there is no set of hardware and software standards existing today, nor any likely to come along, that will provide any reasonable level of confidence that the stored information will still be accessible (without unreasonable levels of effort) decades from now.45 The only way that my archive (or any other information base) can remain viable is if it is continually upgraded and ported to the latest hardware and software standards. If an archive remains ignored, it will ultimately become as inaccessible as my old eight-inch PDP-8 floppy disks.
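The maintenance regimen this conclusion implies—continually copying an archive forward to new storage and verifying that nothing is lost in each migration—can be sketched in a few lines. The script below is purely illustrative (the function names and the choice of SHA-256 checksums are my own assumptions, not anything prescribed here): it copies every file in an archive to a new location and confirms, bit for bit, that each copy matches its source.

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so even large archive files fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def migrate_archive(src: Path, dst: Path) -> int:
    """Copy every file under src to dst, verifying each copy by checksum.

    Returns the number of files migrated; raises IOError on corruption.
    """
    count = 0
    for item in src.rglob("*"):
        if not item.is_file():
            continue
        target = dst / item.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(item, target)  # preserves timestamps where possible
        if sha256_of(target) != sha256_of(item):
            raise IOError(f"checksum mismatch migrating {item}")
        count += 1
    return count
```

Of course, such a script solves only the easy half of the problem—bit-level fidelity on the current medium. The harder half, converting obsolete file formats into ones that future software can interpret, is exactly the step that requires someone to keep caring.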
Information will continue to require constant maintenance and support to remain “alive.” Whether data or wisdom, information will survive only if we want it to. By extension, we can only live for as long as we care about ourselves. Already our knowledge of how to control disease and aging has advanced to the point that your attitude toward your own longevity is now the most important influence on your long-range health.
Our civilization’s trove of knowledge does not simply survive by itself. We must continually rediscover, reinterpret, and reformat the legacy of culture and technology that our forebears have bestowed on us. All of this information will be fleeting if no one cares about it. Translating our currently hardwired thoughts into software will not necessarily provide us with immortality. It will simply place the means to determine how long we want our lives and thoughts to last in our own figurative hands.
MOLLY 2004: So what you’re saying is that I’m just a file?
MOLLY 2104: Well, not a static file, but a dynamic file. But what do you mean “just”? What could be more important?
MOLLY 2004: Well, I throw files away all the time, even dynamic ones.
MOLLY 2104: Not all files are created equal.
MOLLY 2004: I suppose that’s true. I was devastated when I lost my only copy of my senior thesis. I lost six months of work and had to start over.
MOLLY 2104: Ah, yes, that was awful. I remember it well, even though it was over a century ago. It was devastating because it was a small part of myself. I had invested my thoughts and creativity in that file of information. So think how precious all of your—my—accumulated thoughts, experience, skills, and history are.
. . . on Warfare: The Remote, Robotic, Robust, Size-Reduced, Virtual-Reality Paradigm
As weapons have become more intelligent, there has been a dramatic trend toward more precise missions with fewer casualties. It may not seem that way when viewed alongside the tendency toward more detailed, realistic television-news coverage. The great battles of World Wars I and II and the Korean War, in which tens of thousands of lives were lost over the course of a few days, were visually recorded only by occasional grainy newsreels. Today, we have a front-row seat for almost every engagement. Each war has its complexities, but the overall movement toward precision intelligent warfare is clear from the casualty numbers. This trend is similar to what we are beginning to see in medicine, where smart weapons against disease are able to perform specific missions with far fewer side effects. The trend is similar for collateral casualties, although it may not seem that way from contemporary media coverage (recall that about fifty million civilians died in World War II).
I am one of five members of the Army Science Advisory Group (ASAG), which advises the U.S. Army on priorities for its science research. Although our briefings, deliberations, and recommendations are confidential, I can share some overall technological directions that are being pursued by the army and all of the U.S. armed forces.
Dr. John A. Parmentola, director for research and laboratory management for the U.S. Army and liaison to the ASAG, describes the Department of Defense’s “transformation” process as a move toward an armed force that is “highly responsive, network-centric, capable of swift decision, superior in all echelons, and [able to provide] overwhelming massed effects across any battle space.”46 He describes the Future Combat System (FCS), now under development and scheduled to roll out during the second decade of this century, as “smaller, lighter, faster, more lethal, and smarter.”
Dramatic changes are planned for future war-fighting deployments and technology. Although details are likely to change, the army envisions deploying Brigade Combat Teams (BCTs) of about 2,500 soldiers, unmanned robotic systems, and FCS equipment. A single BCT would represent about 3,300 “platforms,” each with its own intelligent computational capabilities. The BCT would share a common operating picture (COP) of the battlefield, appropriately translated for each soldier and delivered through a variety of means, including retinal (and other forms of “heads up”) displays and, in the future, direct neural connection.
The army’s goal is to be capable of deploying a BCT in 96 hours and a full division in 120 hours. The load for each soldier, which is now about one hundred pounds of equipment, will initially be reduced through new materials and devices to forty pounds, while dramatically improving effectiveness. Some of the equipment would be offloaded to “robotic mules.”
A new uniform material has been developed using a novel form of Kevlar with silica nanoparticles suspended in polyethylene glycol. The material is flexible in normal use, but when stressed it instantly forms a nearly impenetrable mass that is stab resistant. The army’s Institute for Soldier Nanotechnologies at MIT is developing a nanotechnology-based material called “exomuscle” to enable combatants to greatly increase their physical strength when manipulating heavy equipment.47
The Abrams tank has a remarkable survival record, with only three combat casualties in its twenty years of combat use. This is the result of both advanced armor materials and intelligent systems designed to defeat incoming weapons, such as missiles. However, the tank weighs more than seventy tons, a figure that will need to be significantly reduced to meet FCS goals for smaller systems. New lightweight yet ultrastrong nanomaterials (such as plastics combined with nanotubes, which are fifty times stronger than steel), as well as increased computer intelligence to counteract missile attacks, are expected to dramatically lower the weight of ground combat systems.
The trend toward unmanned aerial vehicles (UAVs), which started with the armed Predator in the recent Afghanistan and Iraq campaigns, will accelerate. Army research includes the development of micro-UAVs the size of birds that will be fast, accurate, and capable of performing both reconnaissance and combat missions. Even smaller UAVs the size of bumblebees are envisioned. The navigational ability of an actual bumblebee, which is based on a complex interaction between its left and right vision systems, has recently been reverse engineered and will be applied to these tiny flying machines.