Treat technological risks on the same basis as natural risks; avoid underweighting natural risks and overweighting human-technological risks. Fully account for the benefits of technological advances.
Estimate the lost opportunities of abandoning a technology, and take into account the costs and risks of substituting other credible options, carefully considering widely distributed effects and follow-on effects.
Consider restrictive measures only if the potential impact of an activity has both significant probability and severity. In such cases, if the activity also generates benefits, discount the impacts according to the feasibility of adapting to the adverse effects. If measures to limit technological advance do appear justified, ensure that the extent of those measures is proportionate to the extent of the probable effects.
When choosing among measures to restrict technological innovation, prioritize decision criteria as follows: Give priority to risks to human and other intelligent life over risks to other species; give non-lethal threats to human health priority over threats limited to the environment (within reasonable limits); give priority to immediate threats over distant threats; prefer the measure with the highest expectation value by giving priority to more certain over less certain threats, and to irreversible or persistent impacts over transient impacts.
25. Martin Rees, Our Final Hour: A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future in This Century — on Earth and Beyond (New York: Basic Books, 2003).
26. Scott Shane, Dismantling Utopia: How Information Ended the Soviet Union (Chicago: Ivan R. Dee, 1994); see also the review by James A. Dorn at http://www.cato.org/pubs/journal/cj16n2-7.html.
27. See George DeWan, “Diary of a Colonial Housewife,” Newsday, 2005, for one account of the difficulty of human life a couple of centuries ago: http://www.newsday.com/community/guide/lihistory/ny-history-hs331a,0,6101197.story.
28. Jim Oeppen and James W. Vaupel, “Broken Limits to Life Expectancy,” Science 296.5570 (May 10, 2002): 1029–31.
29. Steve Bowman and Helit Barel, Weapons of Mass Destruction: The Terrorist Threat, Congressional Research Service Report for Congress, December 8, 1999, http://www.cnie.org/nle/crsreports/international/inter-75.pdf.
30. Eliezer S. Yudkowsky, “Creating Friendly AI 1.0, The Analysis and Design of Benevolent Goal Architectures” (2001), The Singularity Institute, http://www.singinst.org/CFAI/; Eliezer S. Yudkowsky, “What Is Friendly AI?” May 3, 2001, http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0172.html.
31. Ted Kaczynski, “The Unabomber’s Manifesto,” May 14, 2001, http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0182.html.
32. Bill McKibben, Enough: Staying Human in an Engineered Age (New York: Times Books, 2003).
33. Kaczynski, “The Unabomber’s Manifesto.”
34. Foresight Institute and IMM, “Foresight Guidelines on Molecular Nanotechnology,” February 21, 1999, http://www.foresight.org/guidelines/current.html; Christine Peterson, “Molecular Manufacturing: Societal Implications of Advanced Nanotechnology,” April 9, 2003, http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0557.html; Chris Phoenix and Mike Treder, “Safe Utilization of Advanced Nanotechnology,” January 28, 2003, http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0547.html; Robert A. Freitas Jr., “The Gray Goo Problem,” KurzweilAI.net, March 20, 2002, http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0142.html.
35. Robert A. Freitas Jr., private communication to Ray Kurzweil, January 2005. Freitas describes his proposal in detail in Robert A. Freitas Jr., “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations.”
36. Ralph C. Merkle, “Self Replicating Systems and Low Cost Manufacturing,” 1994, http://www.zyvex.com/nanotech/selfRepNATO.html.
37. Neil King Jr. and Ted Bridis, “FBI System Covertly Searches E-mail,” Wall Street Journal Online (July 10, 2000), http://zdnet.com.com/2100-11-522071.html?legacy=zdnn.
38. Patrick Moore, “The Battle for Biotech Progress—GM Crops Are Good for the Environment and Human Welfare,” Greenspirit (February 2004), http://www.greenspirit.com/logbook.cfm?msid=62.
39. “GMOs: Are There Any Risks?” European Commission (October 9, 2001), http://europa.eu.int/comm/research/biosociety/pdf/gmo_press_release.pdf.
40. Rory Carroll, “Zambians Starve As Food Aid Lies Rejected,” Guardian (October 17, 2002), http://www.guardian.co.uk/gmdebate/Story/0,2763,813220,00.html.
41. Larry Thompson, “Human Gene Therapy: Harsh Lessons, High Hopes,” FDA Consumer Magazine (September–October 2000), http://www.fda.gov/fdac/features/2000/500_gene.html.
42. Bill Joy, “Why the Future Doesn’t Need Us.”
43. The Foresight Guidelines (Foresight Institute, version 4.0, October 2004, http://www.foresight.org/guidelines/current.html) are designed to address the potential positive and negative consequences of nanotechnology. They are intended to inform citizens, companies, and governments, and provide specific guidelines to responsibly develop nanotechnology-based molecular manufacturing. The Foresight Guidelines were initially developed at the Institute Workshop on Molecular Nanotechnology Research Policy Guidelines, sponsored by the institute and the Institute for Molecular Manufacturing (IMM), February 19–21, 1999. Participants included James Bennett, Greg Burch, K. Eric Drexler, Neil Jacobstein, Tanya Jones, Ralph Merkle, Mark Miller, Ed Niehaus, Pat Parker, Christine Peterson, Glenn Reynolds, and Philippe Van Nedervelde. The guidelines have been updated several times.
44. Martine Rothblatt, CEO of United Therapeutics, has proposed replacing this moratorium with a regulatory regime in which a new International Xenotransplantation Authority inspects and approves pathogen-free herds of genetically engineered pigs as acceptable sources of xenografts. Rothblatt’s solution also helps stamp out rogue xenograft surgeons by promising each country that joins the IXA and helps to enforce the rules within its borders a fair share of the pathogen-free xenografts for its own citizens suffering from organ failure. See Martine Rothblatt, “Your Life or Mine: Using Geoethics to Resolve the Conflict Between Public and Private Interests,” in Xenotransplantation (Burlington, Vt.: Ashgate, 2004). Disclosure: I am on the board of directors of United Therapeutics.
45. See Singularity Institute, http://www.singinst.org. Also see note 30 above. Yudkowsky formed the Singularity Institute for Artificial Intelligence (SIAI) to develop “Friendly AI,” intended to “create cognitive content, design features, and cognitive architectures that result in benevolence” before near-human or better-than-human AIs become possible. SIAI has developed The SIAI Guidelines on Friendly AI: “Friendly AI,” http://www.singinst.org/friendly/. Ben Goertzel and his Artificial General Intelligence Research Institute have also examined issues related to developing friendly AI; his current focus is on developing the Novamente AI Engine, a set of learning algorithms and architectures. Peter Voss, founder of Adaptive A.I., Inc., has also collaborated on friendly-AI issues: http://adaptiveai.com/.
46. Integrated Fuel Cell Technologies, http://ifctech.com. Disclosure: The author is an early investor in and adviser to IFCT.
47. New York Times, September 23, 2003, editorial page.
48. The House Committee on Science of the U.S. House of Representatives held a hearing on April 9, 2003, to “examine the societal implications of nanotechnology and H.R. 766, the Nanotechnology Research and Development Act of 2002.” See “Full Science Committee Hearing on the Societal Implications of Nanotechnology,” http://www.house.gov/science/hearings/full03/index.htm, and “Hearing Transcript,” http://commdocs.house.gov/committees/science/hsy86340.000/hsy86340_0f.htm. For Ray Kurzweil’s testimony, see also http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0556.html. Also see Amara D. Angelica, “Congressional Hearing Addresses Public Concerns About Nanotech,” April 14, 2003, http://www.KurzweilAI.net/articles/art0558.html.
Chapter Nine: Response to Critics
1. Michael Denton, “Organism and Machine,” in Jay W. Richards et al., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I. (Seattle: Discovery Institute Press, 2002), http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0502.html.
2. Jaron Lanier, “One Half of a Manifesto,” Edge (September 25, 2000), http://www.edge.org/documents/archive/edge74.html.
3. Ibid.
4. See chapters 5 and 6 for examples of narrow AI now deeply embedded in our modern infrastructure.
5. Lanier, “One Half of a Manifesto.”
6. An example is Kurzweil Voice, developed originally by Kurzweil Applied Intelligence.
7. Alan G. Ganek, “The Dawning of the Autonomic Computing Era,” IBM Systems Journal (March 2003), http://www.findarticles.com/p/articles/mi_m0ISJ/is_1_42/ai_98695283/print.
8. Arthur H. Watson and Thomas J. McCabe, “Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric,” NIST Special Publication 500-235, Computer Systems Laboratory, National Institute of Standards and Technology, 1996.
9. Mark A. Richards and Gary A. Shaw, “Chips, Architectures and Algorithms: Reflections on the Exponential Growth of Digital Signal Processing Capability,” submitted to IEEE Signal Processing, December 2004.
10. Jon Bentley, “Programming Pearls,” Communications of the ACM 27.11 (November 1984): 1087–92.
11. C. Eldering, M. L. Sylla, and J. A. Eisenach, “Is There a Moore’s Law for Bandwidth?” IEEE Communications (October 1999): 117–21.
12. J. W. Cooley and J. W. Tukey, “An Algorithm for the Machine Calculation of Complex Fourier Series,” Mathematics of Computation 19 (April 1965): 297–301.
13. There are an estimated 100 billion neurons with an estimated interneuronal connection “fan out” of about 1,000, so there are about 100 trillion (10^14) connections. Each connection requires at least 70 bits to store an ID for the two neurons at either end of the connection. So that’s approximately 10^16 bits. Even the uncompressed genome is about 6 billion bits (about 10^10), a ratio of at least 10^6:1. See chapter 4.
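The arithmetic in this note can be checked directly. This is a minimal sketch using the figures given above (100 billion neurons, fan-out of about 1,000, at least 70 bits per connection, a 6-billion-bit genome); the 70-bit figure is the note's estimate, not an independent measurement:

```python
import math

neurons = 1e11              # ~100 billion neurons
fan_out = 1e3               # ~1,000 connections per neuron
connections = neurons * fan_out          # ~10^14 connections
bits_per_connection = 70                 # ID for the two endpoint neurons
brain_bits = connections * bits_per_connection   # 7e15, i.e. ~10^16 bits

genome_bits = 6e9           # ~6 billion bits, uncompressed (~10^10)

ratio = brain_bits / genome_bits         # ~1.17e6, so at least 10^6 : 1
print(f"brain ~= 10^{math.log10(brain_bits):.1f} bits, "
      f"ratio ~= 10^{math.log10(ratio):.1f} : 1")
```

Running this confirms the brain-state estimate is roughly a million times larger than the genome, as the note claims.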
14. Robert A. Freitas Jr., Nanomedicine, vol. I, Basic Capabilities, section 6.3.4.2, “Biological Chemomechanical Power Conversion” (Georgetown, Tex.: Landes Bioscience, 1999), pp. 147–48, http://www.nanomedicine.com/NMI/6.3.4.2.htm#p4; see illustration at http://www.nanomedicine.com/NMI/Figures/6.2.jpg.
15. Richard Dawkins, “Why Don’t Animals Have Wheels?” Sunday Times, November 24, 1996, http://www.simonyi.ox.ac.uk/dawkins/WorldOfDawkins-archive/Dawkins/Work/Articles/1996-11-24wheels.shtml.
16. Thomas Ray, “Kurzweil’s Turing Fallacy,” in Richards et al., Are We Spiritual Machines?
17. Ibid.
18. Anthony J. Bell, “Levels and Loops: The Future of Artificial Intelligence and Neuroscience,” Philosophical Transactions of the Royal Society of London B 354 (1999): 2013–20, http://www.cnl.salk.edu/~tony/ptrsl.pdf.
19. Ibid.
20. David Dewey, “Introduction to the Mandelbrot Set,” http://www.ddewey.net/mandelbrot.
21. Christof Koch quoted in John Horgan, The End of Science (Reading, Mass.: Addison-Wesley, 1996).
22. Roger Penrose, Shadows of the Mind: A Search for the Missing Science of Consciousness (New York: Oxford University Press, 1996); Stuart Hameroff and Roger Penrose, “Orchestrated Objective Reduction of Quantum Coherence in Brain Microtubules: The ‘Orch OR’ Model for Consciousness,” Mathematics and Computer Simulation 40 (1996): 453–80, http://www.quantumconsciousness.org/penrosehameroff/orchOR.html.
23. Sander Olson, “Interview with Seth Lloyd,” November 17, 2002, http://www.nanomagazine.com/i.php?id=2002_11_17.
24. Bell, “Levels and Loops.”
25. See the exponential growth of computing graphs in chapter 2 (pp. 67, 70).
26. Alfred N. Whitehead and Bertrand Russell, Principia Mathematica, 3 vols. (Cambridge, U.K.: Cambridge University Press, 1910, 1912, 1913).
27. Gödel’s incompleteness theorem first appeared in his “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik und Physik 38 (1931): 173–98.
28. Alan M. Turing, “On Computable Numbers with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society 42 (1936): 230–65. The “Entscheidungsproblem” is the decision or halting problem—that is, how to determine ahead of time whether an algorithm will halt (come to a decision) or continue in an infinite loop.
29. Church’s version appeared in Alonzo Church, “An Unsolvable Problem of Elementary Number Theory,” American Journal of Mathematics 58 (1936): 345–63.
30. For an entertaining introductory account of some of the implications of the Church-Turing thesis, see Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979).
31. The busy-beaver problem is one example of a large class of noncomputable functions, as seen in Tibor Radó, “On Non-Computable Functions,” Bell System Technical Journal 41.3 (1962): 877–84.
32. Ray, “Kurzweil’s Turing Fallacy.”
33. Lanier, “One Half of a Manifesto.”
34. A human, that is, who is not asleep and not in a coma and of sufficient development (that is, not a prebrain fetus) to be conscious.
35. John R. Searle, “I Married a Computer,” in Richards et al., Are We Spiritual Machines?
36. John R. Searle, The Rediscovery of the Mind (Cambridge, Mass.: MIT Press, 1992).
37. Hans Moravec, Letter to the Editor, New York Review of Books, http://www.kurzweiltech.com/Searle/searle_response_letter.htm.
38. John Searle to Ray Kurzweil, December 15, 1998.
39. Lanier, “One Half of a Manifesto.”
40. David Brooks, “Good News About Poverty,” New York Times, November 27, 2004, A35.
41. Hans Moravec, Letter to the Editor, New York Review of Books, http://www.kurzweiltech.com/Searle/searle_response_letter.htm.
42. Patrick Moore, “The Battle for Biotech Progress—GM Crops Are Good for the Environment and Human Welfare,” Greenspirit (February 2004), www.greenspirit.com/logbook.cfm?msid=62.
43. Joel Cutcher-Gershenfeld, private communication to Ray Kurzweil, February 2005.
44. William A. Dembski, “Kurzweil’s Impoverished Spirituality,” in Richards et al., Are We Spiritual Machines?
45. Denton, “Organism and Machine.”
Epilogue
1. As quoted in James Gardner, “Selfish Biocosm,” Complexity 5.3 (January–February 2000): 34–45.
2. In the function y = 1/x, if x = 0, the function is literally undefined, but we can show that the value of y exceeds any finite number. We can transform y = 1/x into x = 1/y by flipping the numerator and denominator of both sides of the equation. So if we set y to a large finite number, we can see that x becomes very small but not zero, no matter how big y gets. So the value of y in y = 1/x exceeds any finite value as x approaches 0. Another way to express this is that we can exceed any possible finite value of y by setting x to be greater than 0 but smaller than 1 divided by that value.
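The note's argument can be illustrated numerically. This sketch picks an arbitrary bound M (the test values below are chosen for illustration only) and shows that choosing x between 0 and 1/M forces y = 1/x above M:

```python
def exceeds_bound(M, shrink=10.0):
    """For any finite bound M > 0, pick 0 < x < 1/M; then y = 1/x > M."""
    x = 1.0 / (M * shrink)   # x is positive but smaller than 1/M
    y = 1.0 / x              # y = M * shrink, which exceeds M
    return y > M

# No matter how large a bound we name, y = 1/x surpasses it near x = 0.
for M in (1e3, 1e6, 1e12):
    assert exceeds_bound(M)
print("y = 1/x exceeds every tested finite bound as x approaches 0")
```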
3. With estimates of 10^16 cps for functional simulation of the human brain (see chapter 3) and about 10^10 (under ten billion) human brains, that’s 10^26 cps for all biological human brains. So 10^90 cps exceeds this by a factor of 10^64. If we use the more conservative figure of 10^19 cps, which I estimated was necessary to simulate each nonlinearity in each neuron component (dendrite, axon, and so on), we get a factor of 10^61. A trillion trillion trillion trillion trillion is 10^60.
4. See the estimates in the preceding note; 10^42 cps exceeds this by a factor of ten thousand trillion (10^16).
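The exponent arithmetic in the two notes above can be verified directly, using the cps figures the notes state (10^16 and 10^19 cps per brain, about 10^10 brains):

```python
cps_per_brain_functional = 10**16   # functional-simulation estimate (chapter 3)
cps_per_brain_detailed = 10**19     # per-nonlinearity estimate
human_brains = 10**10               # just under ten billion people

all_brains = cps_per_brain_functional * human_brains   # 10^26 cps total
assert all_brains == 10**26

assert 10**90 // all_brains == 10**64                  # note 3, functional figure
assert 10**90 // (cps_per_brain_detailed * human_brains) == 10**61

# "A trillion trillion trillion trillion trillion" = (10^12)^5 = 10^60
assert (10**12) ** 5 == 10**60

assert 10**42 // all_brains == 10**16                  # note 4
print("all exponent arithmetic checks out")
```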
5. Stephen Jay Gould, “Jove’s Thunderbolts,” Natural History 103.10 (October 1994): 6–12; chapter 13 in Dinosaur in a Haystack: Reflections in Natural History (New York: Harmony Books, 1995).
* * *
Index
Page numbers in italics refer to illustrations.
aaATIII, 222
Abbott, A., 585n
Abbott, Larry, 170, 542n
Abbott Laboratories, 282
ABC News, 393
Abduljalil, A. M., 540n
Abeln, G. C., 562n, 563n
Abrams tanks, 332, 335
abstraction, 16, 175, 198
“Accelerating Change” conference, 504n
accelerating returns, law of, 3, 7–14, 29, 35–110, 371, 373, 432, 441, 457, 507n–526n
communications and, 35, 48–50, 48–50, 73, 76–77, 77, 97, 102, 245–246, 511n–512n
computer memory and, 57–58, 57, 58, 59, 75–76, 75, 76, 96, 102
conservatism of social institutions and, 472–473
DNA sequencing and, 73–74, 73, 74, 514n
economic growth and, 96–110, 433, 524n–526n
ETI and, 344
exponential growth in, 3, 7–14, 35, 40–46, 56–84, 57–65, 67, 69–71, 73–84, 96–101, 98, 99, 101, 106–110, 108, 257, 498n
farsighted evolution and, 47–50, 48–50
fractal designs and, 46–47
information, order, and evolution and, 85–94, 516n–523n
intelligence and, 265, 344, 349–351
Internet and, 78–81, 78–81, 95, 97, 516n
life cycle of a paradigm and, 43–46
life cycle of technology and, 51–56
miniaturization and, 42–43, 45, 57–61, 57–60, 73, 82–84, 82–84, 96, 102, 227
Moore’s Law and, see Moore’s Law
nature of order and, 36–43
principles of, 40–43
revisiting of, 491–496, 504n
second law of thermodynamics and, 39–40
world hunger solution and, 224
see also exponential growth
acceleration, 10, 165
actin, 175, 199–200, 383
Acura RL, 287
acute myeloblastic leukemia, 215
adenosine triphosphate (ATP), 118, 232, 234, 238, 306, 399