It would take biologists decades to fully decipher the mechanism that lay behind these effects, but the spectrum of damaged tissues—skin, lips, blood, gums, and nails—already provided an important clue: radiation was attacking DNA. DNA is an inert molecule, exquisitely resistant to most chemical reactions, for its job is to maintain the stability of genetic information. But X-rays can shatter strands of DNA or generate toxic chemicals that corrode DNA. Cells respond to this damage by dying or, more often, by ceasing to divide. X-rays thus preferentially kill the most rapidly proliferating cells in the body, cells in the skin, nails, gums, and blood.
This ability of X-rays to selectively kill rapidly dividing cells did not go unnoticed—especially by cancer researchers. In 1896, barely a year after Röntgen had discovered his X-rays, a twenty-one-year-old Chicago medical student, Emil Grubbe, had the inspired notion of using X-rays to treat cancer. Flamboyant, adventurous, and fiercely inventive, Grubbe had worked in a factory in Chicago that produced vacuum X-ray tubes, and he had built a crude version of a tube for his own experiments. Having encountered X-ray-exposed factory workers with peeling skin and nails—his own hands had also become chapped and swollen from repeated exposures—Grubbe quickly extended the logic of this cell death to tumors.
On March 29, 1896, in a tube factory on Halsted Street (the name bears no connection to Halsted the surgeon) in Chicago, Grubbe began to bombard Rose Lee, an elderly woman with breast cancer, with radiation from an improvised X-ray tube. Lee’s cancer had relapsed after a mastectomy, and the tumor had exploded into a painful mass in her breast. She had been referred to Grubbe as a last-ditch measure, more to satisfy his experimental curiosity than to provide any clinical benefit. Grubbe searched the factory for something to shield the rest of her chest and, finding no sheet of metal, wrapped it in some tinfoil that he found at the bottom of a Chinese tea box. He irradiated her cancer every night for eighteen consecutive days. The treatment was painful—but somewhat successful. The tumor in Lee’s breast ulcerated, tightened, and shrank, producing the first documented local response in the history of X-ray therapy. A few months after the initial treatment, though, Lee became dizzy and nauseated. The cancer had metastasized to her spine, brain, and liver, and she died shortly after. Grubbe had stumbled on another important observation: X-rays could only be used to treat cancer locally, with little effect on tumors that had already metastasized.*
Inspired by the response, even if it had been temporary, Grubbe began using X-ray therapy to treat scores of other patients with local tumors. A new branch of cancer medicine, radiation oncology, was born, with X-ray clinics mushrooming across Europe and America. By the early 1900s, less than a decade after Röntgen’s discovery, doctors waxed ecstatic about the possibility of curing cancer with radiation. “I believe this treatment is an absolute cure for all forms of cancer,” a Chicago physician noted in 1901. “I do not know what its limitations are.”
With the Curies’ isolation of radium in 1902 (the element itself had been discovered in 1898), surgeons could beam bursts of energy a thousandfold more powerful onto tumors. Conferences and societies on high-dose radiation therapy were organized in a flurry of excitement. Radium was infused into gold wires and stitched directly into tumors to produce even higher local doses of radiation. Surgeons implanted radon pellets into abdominal tumors. By the 1930s and ’40s, America had a national surplus of radium, so much so that it was being advertised for sale to laypeople in the back pages of journals. Vacuum-tube technology advanced in parallel; by the mid-1950s, variants of these tubes could deliver blisteringly high doses of X-ray energy into cancerous tissues.
Radiation therapy catapulted cancer medicine into its atomic age—an age replete with both promise and peril. Certainly, the vocabulary, the images, and the metaphors bore the potent symbolism of atomic power unleashed on cancer. There were “cyclotrons” and “supervoltage rays” and “linear accelerators” and “neutron beams.” One man was asked to think of his X-ray therapy as “millions of tiny bullets of energy.” Another account of a radiation treatment is imbued with the thrill and horror of a space journey: “The patient is put on a stretcher that is placed in the oxygen chamber. As a team of six doctors, nurses, and technicians hover at chamber-side, the radiologist maneuvers a betatron into position. After slamming shut a hatch at the end of the chamber, technicians force oxygen in. After fifteen minutes under full pressure . . . the radiologist turns on the betatron and shoots radiation at the tumor. Following treatment, the patient is decompressed in deep-sea-diver fashion and taken to the recovery room.”
Stuffed into chambers, herded in and out of hatches, hovered upon, monitored through closed-circuit television, pressurized, oxygenated, decompressed, and sent back to a room to recover, patients weathered the onslaught of radiation therapy as if it were an invisible benediction.
And for certain forms of cancer, it was a benediction. Like surgery, radiation was remarkably effective at obliterating locally confined cancers. Breast tumors were pulverized with X-rays. Lymphoma lumps melted away. One woman with a brain tumor woke up from her yearlong coma to watch a basketball game in her hospital room.
But like surgery, radiation medicine also struggled against its inherent limits. Emil Grubbe had already encountered the first of these limits with his earliest experimental treatments: since X-rays could only be directed locally, radiation was of limited use for cancers that had metastasized.* One could double and quadruple the doses of radiant energy, but this did not translate into more cures. Instead, indiscriminate irradiation left patients scarred, blinded, and scalded by doses that had far exceeded tolerability.
The second limit was far more insidious: radiation itself produced cancers. The very mechanism by which X-rays killed rapidly dividing cells—DNA damage—could also create cancer-causing mutations in genes. In the 1910s, soon after the Curies had discovered radium, a New Jersey corporation called U.S. Radium began to mix radium with paint to create a product called Undark—radium-infused paint that emitted a greenish white light at night. Although aware of the many injurious effects of radium, U.S. Radium promoted Undark for clock dials, boasting of glow-in-the-dark watches. Watch painting was a precise and artisanal craft, and young women with nimble, steady hands were commonly employed. These women were encouraged to use the paint without precautions, and to point the brushes with their lips and tongues to produce sharp lettering on the watch dials.
Radium workers soon began to complain of jaw pain, fatigue, and skin and tooth problems. In the late 1920s, medical investigations revealed that the bones in their jaws had necrosed, their tongues had been scarred by irradiation, and many had become chronically anemic (a sign of severe bone marrow damage). Some women, tested with radioactivity counters, were found to be glowing with radioactivity. Over the next decades, dozens of radium-induced tumors sprouted in these exposed workers—sarcomas and leukemias, and bone, tongue, neck, and jaw tumors. In 1927, a group of five severely afflicted women in New Jersey—collectively dubbed the “Radium Girls” by the media—sued U.S. Radium. None of them had yet developed cancers; they were suffering from the more acute effects of radium toxicity—jaw, skin, and tooth necrosis. A year later, the case was settled out of court for a compensation of $10,000 each, plus $600 per year to cover living and medical expenses. Little of that “compensation” was ever collected: many of the Radium Girls, too weak even to raise their hands to take the oath in court, died of leukemia and other cancers soon after their case was settled.
Marie Curie died in July 1934 of aplastic anemia, her bone marrow destroyed by decades of radiation exposure. Emil Grubbe, who had been exposed to somewhat weaker X-rays, also succumbed to the deadly late effects of chronic radiation. By the mid-1940s, Grubbe’s fingers had been amputated one by one to remove necrotic and gangrenous bones, and his face was cut up in repeated operations to remove radiation-induced tumors and premalignant warts. In 1960, at the age of eighty-five, he died in Chicago, with multiple forms of cancer that had spread throughout his body.
The complex intersection of radiation with cancer—cancer-curing at times, cancer-causing at others—dampened the initial enthusiasm of cancer scientists. Radiation was a powerful invisible knife—but still a knife. And a knife, no matter how deft or penetrating, could only reach so far in the battle against cancer. A more discriminating therapy was needed, especially for cancers that were nonlocalized.
In 1932, Willy Meyer, the New York surgeon who had invented the radical mastectomy contemporaneously with Halsted, was asked to address the annual meeting of the American Surgical Association. Gravely ill and bedridden, Meyer knew he would be unable to attend the meeting, but he forwarded a brief, six-paragraph speech to be presented. On May 31, six weeks after Meyer’s death, his letter was read aloud to the roomful of surgeons. There is, in that letter, an unfailing recognition that cancer medicine had reached some terminus, that a new direction was needed. “If a biological systemic after-treatment were added in every instance,” Meyer wrote, “we believe the majority of such patients would remain cured after a properly conducted radical operation.”
Meyer had grasped a deep principle about cancer. Cancer, even when it begins locally, is inevitably waiting to explode out of its confinement. By the time many patients come to their doctor, the illness has often spread beyond surgical control and spilled into the body exactly like the black bile that Galen had envisioned so vividly nearly two thousand years ago.
In fact, Galen seemed to have been right after all—in the accidental, aphoristic way that Democritus had been right about the atom, or Erasmus about the Big Bang centuries before the discovery of galaxies. Galen had, of course, missed the actual cause of cancer. There was no black bile clogging up the body and bubbling out into tumors in frustration. But he had uncannily captured something essential about cancer in his dreamy and visceral metaphor. Cancer was often a humoral disease. Crablike and constantly mobile, it could burrow through invisible channels from one organ to another. It was a “systemic” illness, just as Galen had once made it out to be.
* Metastatic sites of cancer can occasionally be treated with X-rays, although with limited success.
* Radiation can be used to control or palliate metastatic tumors in selected cases, but is rarely curative in these circumstances.
Dyeing and Dying
Those who have not been trained in chemistry or medicine may not realize how difficult the problem of cancer treatment really is. It is almost—not quite, but almost—as hard as finding some agent that will dissolve away the left ear, say, and leave the right ear unharmed. So slight is the difference between the cancer cell and its normal ancestor.
—William Woglom
Life is . . . a chemical incident.
—Paul Ehrlich
as a schoolboy, 1870
A systemic disease demands a systemic cure—but what kind of systemic therapy could possibly cure cancer? Could a drug, like a microscopic surgeon, perform an ultimate pharmacological mastectomy—sparing normal tissue while excising cancer cells? Willy Meyer wasn’t alone in dreaming of such a magical therapy; generations of doctors before him had fantasized about just such a medicine. But how might a drug coursing through the whole body specifically attack a diseased organ?
Specificity refers to the ability of any medicine to discriminate between its intended target and its host. Killing a cancer cell in a test tube is not a particularly difficult task: the chemical world is packed with malevolent poisons that, even in infinitesimal quantities, can dispatch a cancer cell within minutes. The trouble lies in finding a selective poison—a drug that will kill cancer without annihilating the patient. Systemic therapy without specificity is an indiscriminate bomb. For an anticancer poison to become a useful drug, Meyer knew, it needed to be a fantastically nimble knife: sharp enough to kill cancer yet selective enough to spare the patient.
The hunt for such specific, systemic poisons for cancer was precipitated by the search for a very different sort of chemical. The story begins with colonialism and its chief loot: cotton. In the mid-1850s, as ships from India and Egypt laden with bales of cotton unloaded their goods in English ports, cloth milling boomed into a spectacularly successful business in England, one large enough to sustain an entire gamut of subsidiary industries. A vast network of mills sprouted up in the industrial basins of Lancashire and Manchester and around Glasgow. Textile exports dominated the British economy. Between 1851 and 1857, the export of printed goods from England more than quadrupled—from 6 million to 27 million pieces per year. In 1784, cotton products had represented a mere 6 percent of total British exports. By the 1850s, that proportion had peaked at 50 percent.
The cloth-milling boom set off a boom in cloth dyeing, but the two industries—cloth and color—were oddly out of technological step. Dyeing, unlike milling, was still a preindustrial occupation. Cloth dyes had to be extracted from perishable vegetable sources—rusty carmines from Turkish madder root, or deep blues from the indigo plant—using antiquated processes that required patience, expertise, and constant supervision. Printing on textiles with colored dyes (to produce the ever-popular calico prints, for instance) was even more challenging—requiring thickeners, mordants, and solvents in multiple steps—and often took the dyers weeks to complete. The textile industry thus needed professional chemists to dissolve its bleaches and cleansers, to supervise the extraction of dyes, and to find ways to fasten the dyes on cloth. A new discipline called practical chemistry, focused on synthesizing products for textile dyeing, was soon flourishing in polytechnics and institutes all over London.
In 1856, William Perkin, an eighteen-year-old student at one of these institutes, stumbled on what would soon become a Holy Grail of this industry: an inexpensive chemical dye that could be made entirely from scratch. In a makeshift one-room laboratory in his apartment in the East End of London (“half of a small but long-shaped room with a few shelves for bottles and a table”), Perkin was boiling nitric acid and benzene in smuggled glass flasks when he precipitated an unexpected reaction. A chemical had formed inside the flasks with the color of pale, crushed violets. In an era obsessed with dye-making, any colored chemical was considered a potential dye—and a quick dip of a piece of cotton into the flask revealed that the new chemical could color cotton. Moreover, this new chemical did not bleach or bleed. Perkin called it aniline mauve.
Perkin’s discovery was a godsend for the textile industry. Aniline mauve was cheap and imperishable—vastly easier to produce and store than vegetable dyes. As Perkin soon discovered, its parent compound could act as a molecular building block for other dyes, a chemical skeleton on which a variety of side chains could be hung to produce a vast spectrum of vivid colors. By the mid-1860s, a glut of new synthetic dyes, in shades of lilac, blue, magenta, aquamarine, red, and purple, flooded the cloth factories of Europe. In 1857, Perkin, barely nineteen years old, was inducted into the Chemical Society of London as a full fellow, one of the youngest in its history to be thus honored.
Aniline mauve was discovered in England, but dye making reached its chemical zenith in Germany. In the late 1850s, Germany, a rapidly industrializing nation, had been itching to compete in the cloth markets of Europe and America. But unlike England, Germany had scarcely any access to natural dyes: by the time it had entered the scramble to capture colonies, the world had already been sliced up into so many parts, with little left to divide. German cloth millers thus threw themselves into the development of artificial dyes, hoping to rejoin an industry that they had once almost given up as a lost cause.
Dye making in England had rapidly become an intricate chemical business. In Germany—goaded by the textile industry, cosseted by national subsidies, and driven by expansive economic growth—synthetic chemistry underwent an even more colossal boom. In 1883, the German output of alizarin, the brilliant red chemical that imitated natural carmine, reached twelve thousand tons, dwarfing the amount being produced by Perkin’s factory in London. German chemists rushed to produce brighter, stronger, cheaper chemicals and muscled their way into textile factories all around Europe. By the mid-1880s, Germany had emerged as the champion of the chemical arms race (which presaged a much uglier military one) to become the “dye basket” of Europe.
Initially, the German textile chemists lived entirely in the shadow of the dye industry. But emboldened by their successes, the chemists began to synthesize not just dyes and solvents but an entire universe of new molecules: phenols, alcohols, bromides, alkaloids, alizarins, and amides, many of them never before encountered in nature. By the late 1870s, synthetic chemists in Germany had created more molecules than they knew what to do with. “Practical chemistry” had become almost a caricature of itself: an industry seeking a practical purpose for the products that it had so frantically raced to invent.
Early interactions between synthetic chemistry and medicine had largely been disappointing. Gideon Harvey, a seventeenth-century physician, had once called chemists the “most impudent, ignorant, flatulent, fleshy, and vainly boasting sort of mankind.” The mutual scorn and animosity between the two disciplines had persisted. In 1849, August Hofmann, William Perkin’s teacher at the Royal College of Chemistry, gloomily acknowledged the chasm between medicine and chemistry: “None of these compounds have, as yet, found their way into any of the appliances of life. We have not been able to use them . . . for curing disease.”
But even Hofmann knew that the boundary between the synthetic world and the natural world was inevitably collapsing. In 1828, a Berlin scientist named Friedrich Wöhler had sparked a metaphysical storm in science by boiling ammonium cyanate, a plain inorganic salt, and creating urea, a chemical typically produced by the kidneys. The Wöhler experiment—seemingly trivial—had enormous implications. Urea was a “natural” chemical, while its precursor was an inorganic salt. That a chemical produced by living organisms could be conjured so easily in a flask threatened to overturn an entire conception of life: for centuries, the chemistry of living beings was thought to be imbued with some mystical property, a vital essence that could not be duplicated in a laboratory—a theory called vitalism. Wöhler’s experiment struck at the heart of that doctrine. Organic and inorganic chemicals, he showed, were interchangeable. Biology was chemistry: perhaps even a human body was no different from a bag of busily reacting chemicals—a beaker with arms, legs, eyes, brain, and soul.