Wolfram starts by describing the universe as a large network of nodes. The nodes do not exist in “space,” but rather space, as we perceive it, is an illusion created by the smooth transition of phenomena through the network of nodes. One can easily imagine building such a network to represent “naive” (Newtonian) physics by simply building a three-dimensional network to any desired degree of granularity. Phenomena such as “particles” and “waves” that appear to move through space would be represented by “cellular gliders,” which are patterns that are advanced through the network for each cycle of computation. Fans of the game Life (which is based on cellular automata) will recognize the common phenomenon of gliders and the diversity of patterns that can move smoothly through a cellular-automaton network. The speed of light, then, is the result of the clock speed of the celestial computer, since gliders can advance only one cell per computational cycle.
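To make the glider idea concrete, here is a minimal sketch in Python (my illustration, using Conway's game of Life rather than any network of Wolfram's) of a glider pattern that reappears one cell farther along the diagonal every four computational cycles:

    # A glider in Conway's game of Life: a cell is born with exactly 3
    # live neighbors, survives with 2 or 3; all other cells die.
    from collections import Counter

    def step(live):
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(9):
        if generation % 4 == 0:
            # Same five-cell shape, shifted one cell diagonally per 4 steps
            print(generation, sorted(glider))
        glider = step(glider)

Because a pattern can shift at most one cell per update cycle, the clock rate of the grid caps the glider's speed, which is the analogy to the speed of light drawn above.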
Einstein’s general relativity, which describes gravity as perturbations in space itself, as if our three-dimensional world were curved in some unseen fourth dimension, is also straightforward to represent in this scheme. We can imagine a four-dimensional network and can represent apparent curvatures in space in the same way that one represents normal curvatures in three-dimensional space. Alternatively, the network can become denser in certain regions to represent the equivalent of such curvature.
A cellular-automata conception proves useful in explaining the apparent increase in entropy (disorder) that is implied by the second law of thermodynamics. We have to assume that the cellular-automata rule underlying the universe is a class 4 rule (see main text)—otherwise the universe would be a dull place indeed. Wolfram’s primary observation that a class 4 cellular automaton quickly produces apparent randomness (despite its determinate process) is consistent with the tendency toward randomness that we see in Brownian motion and that is implied by the second law.
Special relativity is more difficult. There is an easy mapping from the Newtonian model to the cellular network, but the Newtonian model breaks down in special relativity. In the Newtonian world, if a train is going eighty miles per hour and you drive alongside it on a parallel road at sixty miles per hour, the train will appear to pull away from you at twenty miles per hour. But in the world of special relativity, if you leave Earth at three quarters of the speed of light, light will still appear to move away from you at the full speed of light. In accordance with this apparently paradoxical perspective, both the size and the subjective passage of time for two observers will vary depending on their relative speed. Thus our fixed mapping of space onto nodes becomes considerably more complex; essentially, each observer needs his or her own network. We can apply the same conversion to our “Newtonian” network that we apply to Newtonian space, but it is not clear that representing special relativity this way achieves any greater simplicity.
A cellular-node representation of reality may have its greatest benefit in understanding some aspects of the phenomenon of quantum mechanics. It could provide an explanation for the apparent randomness that we find in quantum phenomena. Consider, for example, the sudden and apparently random creation of particle-antiparticle pairs. The randomness could be the same sort of randomness that we see in class 4 cellular automata. Although predetermined, the behavior of class 4 automata cannot be anticipated (other than by running the cellular automata) and is effectively random.
This is not a new view. It’s equivalent to the “hidden variables” formulation of quantum mechanics, which holds that there are variables we cannot otherwise access that control what appears to us as random behavior. The hidden-variables conception is not inconsistent with the formulas of quantum mechanics. It is possible but unpopular with quantum physicists because it requires a large number of assumptions to work out in a very particular way. However, I do not view this as a good argument against it. The existence of our universe is itself very unlikely and requires many assumptions to all work out in a very precise way. Yet here we are.
A bigger question is, How could a hidden-variables theory be tested? If based on cellular-automata-like processes, the hidden variables would be inherently unpredictable, even if deterministic. We would have to find some other way to “unhide” the hidden variables.
Wolfram’s network conception of the universe provides a potential perspective on the phenomenon of quantum entanglement and the collapse of the wave function. The collapse of the wave function, which renders apparently ambiguous properties of a particle (for example, its location) retroactively determined, can be viewed from the cellular-network perspective as the interaction of the observed phenomenon with the observer itself. As observers, we are not outside the network but exist inside it. We know from cellular-automata mechanics that two entities cannot interact without both being changed, which suggests a basis for wave-function collapse.
Wolfram writes, “If the universe is a network, then it can in a sense easily contain threads that continue to connect particles even when the particles get far apart in terms of ordinary space.” This could provide an explanation for recent dramatic experiments showing nonlocality of action in which two “quantum entangled” particles appear to continue to act in concert with each other even though separated by large distances. Einstein called this “spooky action at a distance” and rejected it, although recent experiments appear to confirm it.
Some phenomena fit more neatly into this cellular automata–network conception than others. Some of the suggestions appear elegant, but as Wolfram’s “Note for Physicists” makes clear, the task of translating all of physics into a consistent cellular-automata–based system is daunting indeed.
Extending his discussion to philosophy, Wolfram “explains” the apparent phenomenon of free will as decisions that are determined but unpredictable. Since there is no way to predict the outcome of a cellular process without actually running the process, and since no simulator could possibly run faster than the universe itself, there is therefore no way to reliably predict human decisions. So even though our decisions are determined, there is no way to preidentify what they will be. However, this is not a fully satisfactory examination of the concept. This observation concerning the lack of predictability can be made for the outcome of most physical processes—such as where a piece of dust will fall on the ground. This view thereby equates human free will with the random descent of a piece of dust. Indeed, that appears to be Wolfram’s view when he states that the process in the human brain is “computationally equivalent” to those taking place in processes such as fluid turbulence.
Some of the phenomena in nature (for example, clouds, coastlines) are characterized by repetitive simple processes such as cellular automata and fractals, but intelligent patterns (such as the human brain) require an evolutionary process (or alternatively, the reverse engineering of the results of such a process). Intelligence is the inspired product of evolution and is also, in my view, the most powerful “force” in the world, ultimately transcending the powers of mindless natural forces.
In summary, Wolfram’s sweeping and ambitious treatise paints a compelling but ultimately overstated and incomplete picture. Wolfram joins a growing community of voices that maintain that patterns of information, rather than matter and energy, represent the more fundamental building blocks of reality. Wolfram has added to our knowledge of how patterns of information create the world we experience, and I look forward to a period of collaboration between Wolfram and his colleagues so that we can build a more robust vision of the ubiquitous role of algorithms in the world.
The lack of predictability of class 4 cellular automata underlies at least some of the apparent complexity of biological systems and does represent one of the important biological paradigms that we can seek to emulate in our technology. It does not explain all of biology. It remains at least possible, however, that such methods can explain all of physics. If Wolfram, or anyone else for that matter, succeeds in formulating physics in terms of cellular-automata operations and their patterns, Wolfram’s book will have earned its title. In any event, I believe the book to be an important work of ontology.
66. Rule 110 states that a cell becomes white if its previous color was, and its two neighbors are, all black or all white, or if its previous color was white and the two neighbors are black and white, respectively; otherwise, the cell becomes black.
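For concreteness, here is a minimal sketch in Python (my illustration, not Wolfram's code) that applies exactly this definition to a row of cells, with black = 1 and white = 0:

    # Neighborhoods (left, center, right) that produce a white (0) cell,
    # per the definition in note 66; every other neighborhood gives black.
    WHITE = {(1, 1, 1),   # previous color and both neighbors all black
             (0, 0, 0),   # previous color and both neighbors all white
             (1, 0, 0)}   # previous color white; neighbors black and white

    def step(cells):
        n = len(cells)
        return [0 if (cells[i - 1], cells[i], cells[(i + 1) % n]) in WHITE
                else 1
                for i in range(n)]

    # Run from a single black cell; the output quickly becomes irregular,
    # the class 4 behavior discussed in the main text.
    row = [0] * 31 + [1] + [0] * 31
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)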
67. Wolfram, New Kind of Science, p. 4, http://www.wolframscience.com/nksonline/page-4-text.
68. Note that certain interpretations of quantum mechanics imply that the world is not based on deterministic rules and that there is an inherent quantum randomness to every interaction at the (small) quantum scale of physical reality.
69. As discussed in note 57 above, the uncompressed genome has about six billion bits of information (order of magnitude = 10^10 bits), and the compressed genome is about 30 to 100 million bytes. Some of this design information applies, of course, to other organs. Even assuming that all 100 million bytes apply to the brain, we get a conservatively high figure of 10^9 bits for the design of the brain in the genome. In chapter 3, I discuss an estimate for “human memory on the level of individual interneuronal connections,” including “the connection patterns and neurotransmitter concentrations,” of 10^18 (billion billion) bits in a mature brain. This is about a billion (10^9) times more information than that in the genome which describes the brain’s design. This increase comes about from the self-organization of the brain as it interacts with the person’s environment.
70. See the sections “Disdisorder” and “The Law of Increasing Entropy Versus the Growth of Order” in my book The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999), pp. 30–33.
71. A universal computer can accept as input the definition of any other computer and then simulate that other computer. This does not address the speed of simulation, which might be relatively slow.
72. C. Geoffrey Woods, “Crossing the Midline,” Science 304.5676 (June 4, 2004): 1455–56; Stephen Matthews, “Early Programming of the Hypothalamo-Pituitary-Adrenal Axis,” Trends in Endocrinology and Metabolism 13.9 (November 1, 2002): 373–80; Justin Crowley and Lawrence Katz, “Early Development of Ocular Dominance Columns,” Science 290.5495 (November 17, 2000): 1321–24; Anna Penn et al., “Competition in the Retinogeniculate Patterning Driven by Spontaneous Activity,” Science 279.5359 (March 27, 1998): 2108–12.
73. The seven commands of a Turing machine are: (1) Read Tape, (2) Move Tape Left, (3) Move Tape Right, (4) Write 0 on the Tape, (5) Write 1 on the Tape, (6) Jump to Another Command, and (7) Halt.
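As a minimal sketch of how these seven commands suffice, here is one hedged Python rendering (my illustration; in particular, how Read and Jump interact is my assumption: Read stores the bit under the head, and Jump branches to one of two command numbers based on that bit, with an unconditional jump listing the same target twice):

    from collections import defaultdict

    def run(program, tape_bits, max_steps=10_000):
        tape = defaultdict(int, enumerate(tape_bits))  # blank cells read 0
        head, pc, last_read = 0, 0, 0
        for _ in range(max_steps):
            op, *args = program[pc]
            if op == "READ":
                last_read = tape[head]
            elif op == "LEFT":
                head -= 1
            elif op == "RIGHT":
                head += 1
            elif op == "WRITE0":
                tape[head] = 0
            elif op == "WRITE1":
                tape[head] = 1
            elif op == "JUMP":        # args = (target_if_0, target_if_1)
                pc = args[last_read]
                continue
            elif op == "HALT":
                break
            pc += 1
        return [tape[i] for i in sorted(tape)]

    # Example program: zero out a run of 1s, halting at the first 0.
    program = [
        ("READ",),        # 0: examine the current cell
        ("JUMP", 5, 2),   # 1: bit was 0 -> halt; bit was 1 -> continue
        ("WRITE0",),      # 2: erase the 1
        ("RIGHT",),       # 3: move right
        ("JUMP", 0, 0),   # 4: loop back unconditionally
        ("HALT",),        # 5
    ]
    print(run(program, [1, 1, 1, 0, 1]))   # -> [0, 0, 0, 0, 1]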
74. In what is perhaps the most impressive analysis in his book, Wolfram shows how a Turing machine with only two states and five possible colors can be a universal Turing machine. For forty years, we’ve thought that a universal Turing machine had to be more complex than this. Also impressive is Wolfram’s demonstration that rule 110 is capable of universal computation, given the right software. Of course, universal computation by itself cannot perform useful tasks without appropriate software.
75. The “nor” gate transforms two inputs into one output. The output of “nor” is true if and only if neither A nor B is true.
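The reason a single “nor” gate can serve as a basis for computation is that every other Boolean gate can be composed from it; a brief sketch (my illustration, not from the book):

    def nor(a, b):
        return not (a or b)

    def not_(a):
        return nor(a, a)                    # A nor A = not A

    def or_(a, b):
        return nor(nor(a, b), nor(a, b))    # not (A nor B) = A or B

    def and_(a, b):
        return nor(nor(a, a), nor(b, b))    # (not A) nor (not B) = A and B

    # Verify against Python's built-in operators over all input pairs.
    for a in (False, True):
        for b in (False, True):
            assert not_(a) == (not a)
            assert or_(a, b) == (a or b)
            assert and_(a, b) == (a and b)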
76. See the section “A nor B: The Basis of Intelligence?” in The Age of Intelligent Machines (Cambridge, Mass.: MIT Press, 1990), pp. 152–57, http://www.KurzweilAI.net/meme/frame.html?m=12.
77. United Nations Economic and Social Commission for Asia and the Pacific, “Regional Road Map Towards an Information Society in Asia and the Pacific,” ST/ESCAP/2283, http://www.unescap.org/publications/detail.asp?id=771; Economic and Social Commission for Western Asia, “Regional Profile of the Information Society in Western Asia,” October 8, 2003, http://www.escwa.org.lb/information/publications/ictd/docs/ictd-03-11-e.pdf; John Enger, “Asia in the Global Information Economy: The Rise of Region-States, The Role of Telecommunications,” presentation at the International Conference on Satellite and Cable Television in Chinese and Asian Regions, Communication Arts Research Institute, Fu Jen Catholic University, June 4–6, 1996.
78. See “The 3 by 5 Initiative,” Fact Sheet 274, December 2003, http://www.who.int/mediacentre/factsheets/2003/fs274/en/print.html.
79. Technology investments accounted for 76 percent of 1998 venture-capital investments ($10.1 billion) (PricewaterhouseCoopers news release, “Venture Capital Investments Rise 24 Percent and Set Record at $14.7 Billion, PricewaterhouseCoopers Finds,” February 16, 1999). In 1999, technology-based companies cornered 90 percent of venture-capital investments ($32 billion) (PricewaterhouseCoopers news release, “Venture Funding Explosion Continues: Annual and Quarterly Investment Records Smashed, According to PricewaterhouseCoopers Money Tree National Survey,” February 14, 2000). Venture-capital levels certainly dropped during the high-tech recession, but in just the second quarter of 2003, software companies alone attracted close to $1 billion (PricewaterhouseCoopers news release, “Venture Capital Investments Stabilize in Q2 2003,” July 29, 2003). In 1974, forty-two firms in all U.S. manufacturing industries received a total of $26.4 million in venture-capital disbursements (in 1974 dollars, or $81 million in 1992 dollars). Samuel Kortum and Josh Lerner, “Assessing the Contribution of Venture Capital to Innovation,” RAND Journal of Economics 31.4 (Winter 2000): 674–92, http://econ.bu.edu/kortum/rje_Winter’00_Kortum.pdf. As Paul Gompers and Josh Lerner say, “Inflows to venture capital funds have expanded from virtually zero in the mid-1970s. . . .” Gompers and Lerner, The Venture Capital Cycle (Cambridge, Mass.: MIT Press, 1999). See also Paul Gompers, “Venture Capital,” in B. Espen Eckbo, ed., Handbook of Corporate Finance: Empirical Corporate Finance, Handbooks in Finance series (Holland: Elsevier, forthcoming), chapter 11, 2005, http://mba.tuck.dartmouth.edu/pages/faculty/espen.eckbo/PDFs/Handbookpdf/CH11-VentureCapital.pdf.
80. An account of how “new economy” technologies are making important transformations to “old economy” industries: Jonathan Rauch, “The New Old Economy: Oil, Computers, and the Reinvention of the Earth,” Atlantic Monthly, January 3, 2001.
81. U.S. Department of Commerce, Bureau of Economic Analysis (http://www.bea.doc.gov); select Table 1.1.6 at http://www.bea.doc.gov/bea/dn/nipaweb/SelectTable.asp?Selected=N.
82. U.S. Department of Commerce, Bureau of Economic Analysis, http://www.bea.doc.gov. Data for 1920–1999: Population Estimates Program, Population Division, U.S. Census Bureau, “Historical National Population Estimates: July 1, 1900 to July 1, 1999,” http://www.census.gov/popest/archives/1990s/popclockest.txt; data for 2000–2004: http://www.census.gov/popest/states/tables/NST-EST2004-01.pdf.
83. “The Global Economy: From Recovery to Expansion,” Results from Global Economic Prospects 2005: Trade, Regionalism and Prosperity (World Bank, 2004), http://globaloutlook.worldbank.org/globaloutlook/outside/globalgrowth.aspx; “World Bank: 2004 Economic Growth Lifts Millions from Poverty,” Voice of America News, http://www.voanews.com/english/2004-11-17-voa41.cfm.
84. Mark Bils and Peter Klenow, “The Acceleration in Variety Growth,” American Economic Review 91.2 (May 2001): 274–80, http://www.klenow.com/Acceleration.pdf.
85. See notes 84, 86, and 87.
86. U.S. Department of Labor, Bureau of Labor Statistics, news report, June 3, 2004. You can generate productivity reports at http://www.bls.gov/bls/productivity.htm.
87. Bureau of Labor Statistics, Major Sector Multifactor Productivity Index, Manufacturing Sector: Output per Hour All Persons (1996 = 100), http://data.bls.gov/PDQ/outside.jsp?survey=mp (requires JavaScript: select “Manufacturing,” “Output Per Hour All Persons,” and starting year 1949), or http://data.bls.gov/cgi-bin/srgate (use series “MPU300001,” “All Years,” and Format 2).
88. George M. Scalise, Semiconductor Industry Association, in “Luncheon Address: The Industry Perspective on Semiconductors,” Productivity and Cyclicality in Semiconductors: Trends, Implications, and Questions — Report of a Symposium (National Academies Press, 2004), p. 40, http://www.nap.edu/openbook/0309092744/html/index.html.
89. Data from Kurzweil Applied Intelligence, now part of ScanSoft (formerly Kurzweil Computer Products).
90. eMarketer, “E-Business in 2003: How the Internet Is Transforming Companies, Industries, and the Economy—a Review in Numbers,” February 2003; “US B2C E-Commerce to Top $90 Billion in 2003,” April 30, 2003, http://www.emarketer.com/Article.aspx?1002207; and “Worldwide B2B E-Commerce to Surpass $1 Trillion By Year’s End,” March 19, 2003, http://www.emarketer.com/Article.aspx?1002125.
91. The patents used in this chart are, as described by the U.S. Patent and Trademark Office, “patents for inventions,” also known as “utility” patents. The U.S. Patent and Trademark Office, Table of Annual U.S. Patent Activity, http://www.uspto.gov/web/offices/ac/ido/oeip/taf/h_counts.htm.
92. The doubling time for IT’s share of the economy is twenty-three years. U.S. Department of Commerce, Economics and Statistics Administration, “The Emerging Digital Economy,” figure 2, http://www.technology.gov/digeconomy/emerging.htm.
93. The doubling time for U.S. education expenditures per capita is twenty-three years. National Center for Education Statistics, Digest of Education Statistics, 2002, http://nces.ed.gov/pubs2003/digest02/tables/dt030.asp.
94. The United Nations estimated that the total global equity market capitalization in 2000 was thirty-seven trillion dollars. United Nations, “Global Finance Profile,” Report of the High-Level Panel of Financing for Development, June 2001, http://www.un.org/reports/financing/profile.htm.
If our perception of future growth rates were to increase by an annual compounded rate of as little as 2 percent over current expectations, then, applying an annual discount rate (for discounting future values to the present) of 6 percent and counting only twenty years of the compounded and discounted additional growth, present values should triple. As the subsequent dialogue points out, this analysis does not take into consideration the likely increase in the discount rate that would result from such a perception of increased future growth.