Page 23 of The Innovators


  This vision coincided with a problem that was dumped in Hoff’s lap in the summer of 1969. A Japanese company named Busicom was planning a powerful new desktop calculator, and it had drawn up specifications for twelve special-purpose microchips (different ones to handle display, calculations, memory, etc.) that it wanted Intel to build. Intel agreed, and a price was set. Noyce asked Hoff to oversee the project. Soon a challenge arose. “The more I learned about this design, the more concerned I became that Intel may have undertaken more than it was prepared to deliver,” Hoff recalled. “The number of chips and their complexity was much greater than I had expected.” There was no way Intel could build them at the agreed price. Making matters worse, the growing popularity of Jack Kilby’s pocket calculator was forcing Busicom to cut its price even further.

  “Well, if there’s anything you can think of to simplify the design, why don’t you pursue it,” Noyce suggested.52

  Hoff proposed that Intel design a single logic chip that could perform almost all of the tasks that Busicom wanted. “I know this can be done,” he said of the general-purpose chip. “It can be made to emulate a computer.” Noyce told him to try it.

  Before they could sell the idea to Busicom, Noyce realized he had to convince someone who might be even more resistant: Andy Grove, who nominally worked for him. Part of what Grove saw as his mandate was keeping Intel focused. Noyce would say yes to almost anything; Grove’s job was to say no. When Noyce sauntered over to Grove’s workspace and sat on the corner of his desk, Grove was immediately on guard. He knew that Noyce’s effort to appear nonchalant was a sign that something was afoot. “We’re starting another project,” Noyce said, affecting a laugh.53 Grove’s first reaction was to tell Noyce he was crazy. Intel was a fledgling company still struggling to manufacture its memory chips, and it didn’t need any distractions. But after he heard Noyce describe Hoff’s idea, Grove realized that resistance was probably wrong and definitely futile.

  By September 1969 Hoff and his colleague Stan Mazor had sketched out the architecture of a general-purpose logic chip that could follow programming instructions. It would be able to do the work of nine of the twelve chips that Busicom had requested. Noyce and Hoff presented the option to Busicom executives, who agreed that it was the better approach.

  When it came time to renegotiate the price, Hoff made a critical recommendation to Noyce, one that helped create a huge market for general-purpose chips and assured that Intel would remain a driver of the digital age. It was a deal point that Bill Gates and Microsoft would emulate with IBM a decade later. In return for giving Busicom a good price, Noyce insisted that Intel retain the rights to the new chip and be allowed to license it to other companies for purposes other than making a calculator. He realized that a chip that could be programmed to perform any logical function would become a standard component in electronic devices, the way two-by-four pieces of lumber were a standard component in the construction of houses. It would replace custom chips, which meant it could be manufactured in bulk and thus continually decline in price. It would also usher in a more subtle shift in the electronics industry: the importance of hardware engineers, who designed the placement of the components on a circuit board, began to be supplanted by a new breed, software engineers, whose job it was to program a set of instructions into the system.

  Because it was essentially a computer processor on a chip, the new device was dubbed a microprocessor. In November 1971 Intel unveiled the product, the Intel 4004, to the public. It took out ads in trade magazines announcing “a new era of integrated electronics—a micro-programmable computer on a chip!” It was priced at $200, and orders, as well as thousands of requests for the manual, began pouring in. Noyce was attending a computer show in Las Vegas on the day of the announcement and was thrilled to watch potential customers cramming into the Intel suite.

  Noyce became an apostle of the microprocessor. At a reunion in San Francisco he hosted for his extended family in 1972, he stood up in the bus he had chartered and waved a wafer over his head. “This is going to change the world,” he told them. “It’s going to revolutionize your home. In your own house, you’ll all have computers. You will have access to all sorts of information.” His relatives passed the wafer around the bus like an object of veneration. “You won’t need money anymore,” he prophesied. “Everything will happen electronically.”54

  He was exaggerating only slightly. Microprocessors began showing up in smart traffic lights and car brakes, coffeemakers and refrigerators, elevators and medical devices, and thousands of other gizmos. But the foremost success of the microprocessor was making possible smaller computers, most notably personal computers that you could have on your desk and in your home. And if Moore’s Law continued to hold true (as it would), a personal computer industry would grow up symbiotically with a microprocessor industry.

  That is what happened in the 1970s. The microprocessor spawned hundreds of new companies making hardware and software for personal computers. Intel not only developed the leading-edge chips; it also created the culture that inspired venture-funded startups to transform the economy and uproot the apricot orchards of Santa Clara Valley, the forty-mile stretch of flat land from south San Francisco through Palo Alto to San Jose.

  The valley’s main artery, a bustling highway named El Camino Real, was once the royal road that connected California’s twenty-one mission churches. By the early 1970s—thanks to Hewlett-Packard, Fred Terman’s Stanford Industrial Park, William Shockley, Fairchild and its Fairchildren—it connected a bustling corridor of tech companies. In 1971 the region got a new moniker. Don Hoefler, a columnist for the weekly trade paper Electronic News, began writing a series of columns entitled “Silicon Valley USA,” and the name stuck.55

  * * *

  I. Only living people can be selected for a Nobel.

  II. The vehicle he used was convertible debentures, which were loans that could be converted into common stock if the company became successful but were worthless (at the end of the line of creditors) if it failed.

  III. Edward “Ned” Johnson III, then running the Fidelity Magellan Fund. In 2013 Rock still had these two sheets, along with the older one seeking the patron for what became Fairchild, tucked in a filing cabinet in his office overlooking San Francisco Bay.

  IV. After she married Noyce she had to leave Intel, and she moved to the fledgling Apple Computer, where she became Steve Jobs’s first director of human resources and also a calming maternal influence on him.

  Dan Edwards and Peter Samson in 1962 playing Spacewar at MIT.

  Nolan Bushnell (1943– ).

  CHAPTER SIX

  * * *

  VIDEO GAMES

  The evolution of microchips led to devices that were, as Moore’s Law forecast, smaller and more powerful each year. But there was another impetus that would drive the computer revolution and, eventually, the demand for personal computers: the belief that computers weren’t merely for number-crunching. They could and should be fun for people to use.

  Two cultures contributed to the idea that computers should be things that we interact and play with. There were the hard-core hackers who believed in “the hands-on imperative” and loved pranks, clever programming tricks, toys, and games.1 And there were the rebel entrepreneurs eager to break into the amusement games industry, which was dominated by syndicates of pinball distributors and ripe for a digital disruption. Thus was born the video game, which turned out to be not merely an amusing sideshow but an integral part of the lineage that led to today’s personal computer. It also helped to propagate the idea that computers should interact with people in real time, have intuitive interfaces, and feature delightful graphic displays.

  STEVE RUSSELL AND SPACEWAR

  The hacker subculture, as well as the seminal video game Spacewar, emanated from MIT’s Tech Model Railroad Club, a geeky student organization founded in 1946 that met in the bowels of a building where radar had been developed. Its bunker was almost completely filled by a model train board with dozens of tracks, switches, trolleys, lights, and towns, all compulsively crafted and historically accurate. Most of its members obsessed over fashioning picture-perfect pieces to display on the layout. But there was a subset of the club that was more interested in what was underneath the sprawling chest-high board. The members of the “Signals and Power Subcommittee” tended to the relays, wires, circuits, and crossbar switches, which were rigged together on the underside of the board to provide a complex hierarchy of controllers for the numerous trains. In this tangled web they saw beauty. “There were neat regimental lines of switches, and achingly regular rows of dull bronze relays, and a long, rambling tangle of red, blue, and yellow wires—twisting and twirling like a rainbow-colored explosion of Einstein’s hair,” Steven Levy wrote in Hackers, which begins with a colorful depiction of the club.2

  Members of the Signals and Power Subcommittee embraced the term hacker with pride. It connoted both technical virtuosity and playfulness, not (as in more recent usage) lawless intrusions into a network. The intricate pranks devised by MIT students—putting a live cow on the roof of a dorm, a plastic cow on the Great Dome of the main building, or causing a huge balloon to emerge midfield during the Harvard-Yale game—were known as hacks. “We at TMRC use the term ‘hacker’ only in its original meaning, someone who applies ingenuity to create a clever result, called a ‘hack,’ ” the club proclaimed. “The essence of a ‘hack’ is that it is done quickly, and is usually inelegant.”3

  Some of the early hackers had been infused with the aspiration of creating machines that could think. Many were students at MIT’s Artificial Intelligence Lab, founded in 1959 by two professors who would become fabled: John McCarthy, a Santa Claus lookalike who coined the term artificial intelligence, and Marvin Minsky, who was so clever that he seemed a refutation of his own belief that computers would someday surpass human intelligence. The prevailing doctrine of the lab was that, given enough processing power, machines could replicate neural networks like those of the human brain and be able to interact intelligently with users. Minsky, a puckish man with twinkling eyes, had built a learning machine designed to model the brain, which he named SNARC (Stochastic Neural Analog Reinforcement Calculator), hinting that he was serious but might also be joking a bit. He had a theory that intelligence could be a product of the interaction of nonintelligent components, such as small computers connected by giant networks.

  A seminal moment for the hackers of the Tech Model Railroad Club came in September 1961, when the Digital Equipment Corporation (DEC) donated the prototype of its PDP-1 computer to MIT. About the size of three refrigerators, the PDP-1 was the first computer to be designed for direct interaction with the user. It could connect to a keyboard and a monitor that displayed graphics, and it could be operated easily by a single person. Like moths to a flame, a handful of hard-core hackers began to circle this new computer, and they formed a cabal to conjure up something fun to do with it. Many of the discussions took place in a rundown apartment on Hingham Street in Cambridge, so the members dubbed themselves the Hingham Institute. The high-minded name was ironic. Their goal was not to come up with some elevated use for the PDP-1 but instead to do something clever.

  Previous hackers had created a few rudimentary games for earlier computers. One at MIT had a dot on a screen that represented a mouse trying to navigate a maze to find a wedge of cheese (or, in later versions, a martini); another, at the Brookhaven National Lab on Long Island, used an oscilloscope on an analog computer to simulate a tennis match. But the members of the Hingham Institute knew that with the PDP-1 they had the chance to create the first real computer video game.

  * * *

  The best programmer in their group was Steve Russell, who was helping Professor McCarthy create the language LISP, which was designed to facilitate artificial intelligence research. Russell was a consummate geek, brimming with passions and intellectual obsessions that ranged from steam trains to thinking machines. Short and excitable, he had thick glasses and curly hair. When he spoke, he sounded like someone had punched his fast-forward button. Although he was intense and energetic, he was prone to procrastination, earning him the nickname “Slug.”

  Like most of his hacker friends, Russell was an avid fan of bad movies and pulp science fiction. His favorite author was E. E. “Doc” Smith, a failed food engineer (an expert on the bleaching of flour, he concocted doughnut mixes) who specialized in a trashy sci-fi subgenre known as space opera. It featured melodramatic adventures filled with battles against evil, interstellar travel, and clichéd romance. Doc Smith “wrote with the grace and refinement of a pneumatic drill,” according to Martin Graetz, a member of the Tech Model Railroad Club and the Hingham Institute, who wrote a reminiscence about the creation of Spacewar. Graetz recalled a typical Doc Smith tale:

  After some preliminary foofaraw to get everyone’s name right, a bunch of overdeveloped Hardy Boys go trekking off through the universe to punch out the latest gang of galactic goons, blow up a few planets, kill all sorts of nasty life forms, and just have a heck of a good time. In a pinch, which is where they usually were, our heroes could be counted on to come up with a complete scientific theory, invent the technology to implement it, and produce the weapons to blow away the baddies, all while being chased in their spaceship hither and thither through the trackless wastes of the galaxy.I

  Afflicted by their passion for such space operas, it’s not surprising that Russell, Graetz, and their friends decided to concoct a space-war game for the PDP-1. “I had just finished reading Doc Smith’s Lensman series,” Russell recalled. “His heroes had a strong tendency to get pursued by the villain across the galaxy and have to invent their way out of their problem while they were being pursued. That sort of action was the thing that suggested Spacewar.”4 Proudly nerdy, they reconstituted themselves into the Hingham Institute Study Group on Space Warfare, and Slug Russell proceeded to code.5

  Except that, true to his nickname, he didn’t. He knew what the starting point of his game program would be. Professor Minsky had stumbled upon an algorithm that drew a circle on the PDP-1 and was able to modify it so that it would display three dots on the screen that interacted with each other, weaving beautiful little patterns. Minsky called his hack the Tri-Pos, but his students dubbed it “the Minskytron.” That was a good foundation for creating a game featuring interacting spaceships and missiles. Russell spent weeks mesmerized by the Minskytron and grokking its ability to make patterns. But he bogged down when it came time to write the sine-cosine routines that would determine the motion of his spaceships.

  When Russell explained this obstacle, a fellow club member named Alan Kotok knew how to solve it. He drove out to the suburban Boston headquarters of DEC, which made the PDP-1, and found a sympathetic engineer who had the routines necessary to make the calculations. “Alright, here are the sine-cosine routines,” Kotok told Russell. “Now what’s your excuse?” Russell later admitted, “I looked around and I didn’t find an excuse, so I had to settle down and do some figuring.”6

  Throughout the Christmas vacation of 1961 Russell hacked away, and within weeks he had produced a method to maneuver dots on the screen by using the toggle switches of the control panel to make them speed up, slow down, and turn. Then he converted the dots into two cartoonish spaceships, one of them fat and bulging like a cigar and the other thin and straight like a pencil. Another subroutine allowed each spaceship to shoot a dot out of its nose, mimicking a missile. When the position of the missile dot coincided with that of a spaceship, the latter would “explode” into randomly moving dots. By February 1962 the basics had been completed.
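The motion Russell built can be sketched in modern terms. This is an illustrative reconstruction, not his PDP-1 assembly: each frame, the player's toggle inputs rotate the ship's heading or fire its thruster, and the sine-cosine routines Kotok obtained from DEC convert heading and thrust into changes in velocity and position. The dictionary-based ship state and the function name are assumptions for the sketch.

```python
import math

def step(ship, turn, thrust, dt=1.0):
    """Advance one frame of a Spacewar-style ship (illustrative sketch).

    `ship` holds position (x, y), velocity (vx, vy), and heading in
    radians. Turning rotates the heading; thrust accelerates the ship
    along it, which is where the sine-cosine routines come in.
    """
    ship["heading"] += turn                            # rotate toggle
    ship["vx"] += thrust * math.cos(ship["heading"]) * dt
    ship["vy"] += thrust * math.sin(ship["heading"]) * dt
    ship["x"] += ship["vx"] * dt                       # drift: velocity
    ship["y"] += ship["vy"] * dt                       # persists each frame
    return ship
```

Because velocity carries over between frames, a ship keeps coasting after the thrust toggle is released, which is what gave the game its feel of piloting in frictionless space.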

  At that point Spacewar became an open-source project. Russell put his program tape in the box that held other PDP-1 programs, and his friends began to make improvements. One of them, Dan Edwards, decided it would be cool to introduce a gravitational force, so he programmed in a big sun that exerted a tug on the ships. If you didn’t pay attention, it could suck you in and destroy you, but good players learned to whip close to the sun and use its gravitational pull to gain momentum and swing around at higher speeds.
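Edwards's sun can be sketched the same way (again an assumption-laden reconstruction, not his original code): each frame, an inverse-square acceleration pulls the ship toward a sun at the origin, and a close, fast pass converts that pull into a slingshot.

```python
import math

def gravity_step(ship, g=100.0, dt=1.0):
    """Apply a central sun's gravity to a ship (illustrative sketch).

    The sun sits at the origin; the constant `g` and the dict-based
    ship state are assumptions. The pull follows an inverse-square law,
    directed along the unit vector from ship to sun.
    """
    dx, dy = -ship["x"], -ship["y"]     # vector pointing at the sun
    r = math.hypot(dx, dy)              # distance to the sun
    a = g / (r * r)                     # inverse-square acceleration
    ship["vx"] += a * (dx / r) * dt
    ship["vy"] += a * (dy / r) * dt
    ship["x"] += ship["vx"] * dt
    ship["y"] += ship["vy"] * dt
    return ship
```

The slingshot falls out of the physics for free: a ship skimming past the sun picks up velocity on the way in and keeps most of it on the way out, exactly the maneuver good players learned to exploit.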

  Another friend, Peter Samson, “thought my stars were random and unrealistic,” Russell recalled.7 Samson decided the game needed “the real thing,” meaning astronomically correct constellations rather than miscellaneous dots. So he created a programming addition he called “Expensive Planetarium.” Using information from the American Ephemeris and Nautical Almanac, he encoded a routine that showed all the stars in the night sky down to the fifth magnitude. By specifying how many times a display point on the screen fired, he was even able to replicate each star’s relative brightness. As the spaceships sped along, the constellations slowly scrolled past.
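Samson's brightness trick relied on the PDP-1's point-plotting display: a dot fired more times per frame glowed brighter on the phosphor. A minimal sketch of that mapping, with an assumed linear scale (his actual encoding is not given in the text):

```python
def fires_per_frame(magnitude, brightest=8):
    """Map a star's astronomical magnitude to display-point firings.

    Lower magnitude means a brighter star (0 is bright, 5 the faintest
    Samson included), so it gets more firings per frame; the linear
    scale and the ceiling of 8 are illustrative assumptions.
    """
    return max(1, brightest - magnitude)
```

So a first-magnitude star would be refired several times each refresh while a fifth-magnitude star got the minimum, approximating the relative brightness of the real night sky.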

  * * *

  This open-source collaboration produced many more clever contributions. Martin Graetz came up with what he called “the ultimate panic button,” which was the ability to get out of a jam by toggling a switch and disappearing temporarily into another dimension of hyperspace. “The idea was that when everything else failed you could jump into the fourth dimension and disappear,” he explained. He had read about something similar, called a “hyper-spatial tube,” in one of Doc Smith’s novels. There were, however, some limits: you could toggle into hyperspace only three times in a game; your disappearance gave your opponent a breather; and you never knew where your spaceship would reappear. It might end up in the sun or right in the sights of your opponent. “It was something you could use, but not something you wanted to use,” Russell explained. Graetz added an homage to Professor Minsky: a ship disappearing into hyperspace left behind one of the signature patterns of the Minskytron.8