  What, then, was the appropriate vector for gene therapy? What kind of virus could be used to deliver genes safely into humans? And which organs were appropriate targets? Just as the field of gene therapy was beginning to confront its most intriguing scientific problems, the entire discipline was placed under a strict moratorium. The litany of troubles uncovered in the OTC trial was not limited to that trial alone. In January 2000, when the FDA inspected twenty-eight other trials, nearly half of them required immediate remedial action. Justifiably alarmed, the FDA shut down nearly all the trials. “The entire field of gene therapy went into free fall,” a journalist wrote. “Wilson was banned from working on FDA-regulated human clinical trials for five years. He stepped down from his position at the helm of the Institute for Human Gene Therapy, remaining as a professor at Penn. Soon afterward the institute itself was gone. In September 1999, gene therapy looked to be on the cusp of a breakthrough in medicine. By the end of 2000, it seemed like a cautionary tale of scientific overreach.” Or, as Ruth Macklin, the bioethicist, put it bluntly, “Gene therapy is not yet therapy.”

  In science, there is a well-known aphorism that the most beautiful theory can be slain by an ugly fact. In medicine, the same aphorism takes a somewhat different form: a beautiful therapy can be killed by an ugly trial. In retrospect, the OTC trial was nothing short of ugly—hurriedly designed, poorly planned, badly monitored, abysmally delivered. It was made twice as hideous by the financial conflicts involved; the prophets were in it for profits. But the basic concept behind the trial—delivering genes into human bodies or cells to correct genetic defects—was sound, as it had been for decades. In principle, the capacity to deliver genes into cells using viruses or other gene vectors should have led to powerful new medical technologies, had the scientific and financial ambitions of the early proponents of gene therapy not gotten in the way.

  Gene therapy would eventually become therapy. It would rebound from the ugliness of the initial trials and learn the moral lessons implicit in the “cautionary tale of scientific overreach.” But it would take yet another decade, and a lot more learning, for the science to cross the breach.

  * * *

  I. Kenneth Culver was also a crucial member of this original team.

  II. In 1980, a UCLA scientist named Martin Cline attempted the first known gene therapy in humans. A hematologist by training, Cline chose to study beta-thalassemia, a genetic disease in which the mutation of a single gene, encoding a subunit of hemoglobin, causes severe anemia. Reasoning that the use of recombinant DNA in humans was less constrained and regulated in foreign countries, Cline did not notify his hospital’s review board and ran his trials on two thalassemia patients in Israel and Italy. Cline’s attempts were discovered by the NIH and UCLA. He was sanctioned by the NIH, found to be in breach of federal regulations, and ultimately resigned as the chair of his division. The complete data from his experiment were never formally published.

  Genetic Diagnosis: “Previvors”

  All that man is,

  All mere complexities.

  —W. B. Yeats, “Byzantium”

  The anti-determinists want to say that DNA is a little side-show, but every disease that’s with us is caused by DNA. And [every disease] can be fixed by DNA.

  —George Church

  While human gene therapy was exiled to wander its scientific tundra in the late 1990s, human genetic diagnosis experienced a remarkable renaissance. To understand this renaissance, we need to return to the “future’s future” envisioned by Berg’s students on the ramparts of the Sicilian castle. As the students had imagined it, the future of human genetics would be built on two fundamental elements. The first was “genetic diagnosis”—the idea that genes could be used to predict or determine illness, identity, choice, and destiny. The second was “genetic alteration”—that genes could be changed to change the future of diseases, choice, and destiny.

  This second project—the intentional alteration of genes (“writing the genome”)—had evidently faltered with the abrupt ban on gene-therapy trials. But the first—predicting future fate from genes (“reading the genome”)—only gained more strength. In the decade following Jesse Gelsinger’s death, geneticists uncovered scores of genes linked to some of the most complex and mysterious human diseases—illnesses for which genes had never been implicated as primary causes. These discoveries would enable the development of immensely powerful new technologies that would allow for the preemptive diagnosis of illness. But they would also force genetics and medicine to confront some of the deepest medical and moral conundrums in their history. “Genetic tests,” as Eric Topol, the medical geneticist, described it, “are also moral tests. When you decide to test for ‘future risk,’ you are also, inevitably, asking yourself, what kind of future am I willing to risk?”

  Three case studies illustrate the power and the peril of using genes to predict “future risk.” The first involves the breast cancer gene BRCA1. In the early 1970s, the geneticist Mary-Claire King began to study the inheritance of breast and ovarian cancer in large families. A mathematician by training, King had met Allan Wilson—the man who had dreamed up Mitochondrial Eve—at the University of California, Berkeley, and switched to the study of genes and the reconstruction of genetic lineages. (King’s earlier studies, performed in Wilson’s lab, had demonstrated that chimps and humans shared more than 99 percent genetic identity.)

  After graduate school, King turned to a different sort of genetic history: reconstructing the lineages of human diseases. Breast cancer, in particular, intrigued her. Decades of careful studies on families had suggested that breast cancer came in two forms—sporadic and familial. In sporadic breast cancer, the illness appears in women without any family history. In familial breast cancer, the cancer courses through families across multiple generations. In a typical pedigree, a woman, her sister, her daughter, and her granddaughter might be affected—although the precise age of diagnosis, and the precise stage of cancer for each individual, might differ. The increased incidence of breast cancer in some of these families is often accompanied by a striking increase in the incidence of ovarian cancer, suggesting a mutation that is common to both forms of cancer.

  In 1978, when the National Cancer Institute launched a survey on breast cancer patients, there was widespread disagreement about the cause of the disease. One camp of cancer experts argued that breast cancer was caused by a chronic viral infection, triggered by the overuse of oral contraceptives. Others blamed stress and diet. King asked to have two questions added to the survey: “Did the patient have a family history of breast cancer? Was there a family history of ovarian cancer?” By the end of the survey, the genetic connection vaulted out of the study: she had identified several families with deep histories of both breast and ovarian cancer. Between 1978 and 1988, King added hundreds of such families to her list and compiled enormous pedigrees of women with breast cancer. In one family with more than 150 members, she found 30 women affected by the illness.

  A closer analysis of all the pedigrees suggested that a single gene was responsible for many of the familial cases—but identifying the gene was not easy. Although the culprit gene increased the cancer risk among carriers by more than tenfold, not everyone who inherited the gene had cancer. The breast cancer gene, King found, had “incomplete penetrance”: even if the gene was mutated, its effect did not always fully “penetrate” into every individual to cause a symptom (i.e., breast or ovarian cancer).

  Despite the confounding effect of penetrance, King’s collection of cases was so large that she could use linkage analysis across multiple families, crossing multiple generations, to narrow the location of the gene to chromosome seventeen. By 1988, she had zoomed in farther on the gene: she had pinpointed it to a region on chromosome seventeen called 17q21. “The gene was still a hypothesis,” she said, but at least it had a known physical presence on a human chromosome. “Being comfortable with uncertainty for years was the . . . lesson of the Wilson lab, and it is an essential part of what we do.” She called the gene BRCA1, even though she had yet to isolate it.

  The narrowing down of the chromosomal locus of BRCA1 launched a furious race to identify the gene. In the early nineties, teams of geneticists across the globe, including King, set out to clone BRCA1. New technologies, such as the polymerase chain reaction (PCR), allowed researchers to make millions of copies of a gene in a test tube. These techniques, coupled with deft gene-cloning, gene-sequencing, and gene-mapping methods, made it possible to move rapidly from a chromosomal position to a gene. In 1994, a private company in Utah named Myriad Genetics announced the isolation of the BRCA1 gene. In 1998, Myriad was granted a patent for the BRCA1 sequence—one of the first-ever patents issued on a human gene sequence.

  For Myriad, the real use of BRCA1 in clinical medicine was genetic testing. In 1996, even before the patent on the gene had been granted, the company began marketing a genetic test for BRCA1. The test was simple: A woman at risk would be evaluated by a genetic counselor. If the family history was suggestive of breast cancer, a swab of cells from her mouth would be sent to a central lab. The lab would amplify parts of her BRCA1 gene using the polymerase chain reaction, sequence the parts, and identify any mutations. It would report back “normal,” “mutant,” or “indeterminate” (some unusual mutations have not yet been fully categorized for breast cancer risk).

  In the summer of 2008, I met a woman with a family history of breast cancer. Jane Sterling was a thirty-seven-year-old nurse from the North Shore of Massachusetts. The story of her family could have been plucked straight out of Mary-Claire King’s case files: a great-grandmother with breast cancer at an early age; a grandmother who had had a radical mastectomy for cancer at forty-five; a mother who had had bilateral breast cancer at sixty. Sterling had two daughters. She had known about BRCA1 testing for nearly a decade. When her first daughter was born, she had considered the test, but neglected to follow up. With the birth of the second daughter, and the diagnosis of breast cancer in a close friend, she came to terms with gene testing.

  Sterling tested positive for a BRCA1 mutation. Two weeks later, she returned to the clinic armed with sheaves of papers scribbled with questions. What would she do with the knowledge of her diagnosis? Women with BRCA1 mutations have about an 80 percent lifetime risk of breast cancer. But the genetic test tells a woman nothing about when she might develop the cancer, nor the kind of cancer that she might have. Since the BRCA1 mutation has incomplete penetrance, a woman with the mutation might develop inoperable, aggressive, therapy-resistant breast cancer at age thirty. She might develop a therapy-sensitive variant at age fifty, or a smoldering, indolent variant at age seventy-five. Or she might not develop cancer at all.

  When should she tell her daughters about the diagnosis? “Some of these women [with BRCA1 mutations] hate their mothers,” one writer, who tested positive herself, wrote (the hatred of mothers, alone, illuminates the chronic misunderstanding of genetics, and its debilitating effects on the human psyche; the mutant BRCA1 gene is as likely to be inherited from a mother as it is from a father). Would Sterling inform her sisters? Her aunts? Her second cousins?

  The uncertainties about outcome were compounded by uncertainties about the choices of therapy. Sterling could choose to do nothing—to watch and wait. She could choose to have bilateral mastectomies and/or ovary removal to sharply diminish her risk of breast and ovarian cancer—“cutting off her breasts to spite her genes,” as one woman with a BRCA1 mutation described it. She could seek intensive screening with mammograms, self-examination, and MRIs to detect early breast cancer. Or she could choose to take a hormonal medicine, such as tamoxifen, which would decrease the risk of some, but not all, breast cancer.

  Part of the reason for this vast variation in outcome lies in the fundamental biology of BRCA1. The gene encodes a protein that plays a critical role in the repair of damaged DNA. For a cell, a broken DNA strand is a catastrophe in the making. It signals the loss of information—a crisis. Soon after DNA damage, the BRCA1 protein is recruited to the broken edges to repair the gap. In patients with the normal gene, the protein launches a chain reaction, recruiting dozens of proteins to the knife edge of the broken strand to swiftly plug the breach. In patients with the mutated gene, however, the mutant BRCA1 protein is not appropriately recruited, and the breaks are not repaired. The mutation thus permits more mutations—like fire fueling fire—until the growth-regulatory and metabolic controls on the cell are snapped, ultimately leading to breast cancer. Breast cancer, even in BRCA1-mutated patients, requires multiple triggers. The environment clearly plays a role: add X-rays, or a DNA-damaging agent, and the mutation rate climbs even higher. Chance plays a role, since the mutations that accumulate are random. And other genes accelerate or mitigate the effects of BRCA1—genes involved in the repair of DNA or the recruitment of the BRCA1 protein to the broken strand.

  The BRCA1 mutation thus predicts a future, but not in the sense that a mutation in the cystic fibrosis gene or Huntington’s disease gene predicts the future. The future of a woman carrying a BRCA1 mutation is fundamentally changed by that knowledge—and yet it remains just as fundamentally uncertain. For some women, the genetic diagnosis is all-consuming; it is as if their lives and energies are spent anticipating cancer and imagining survivorship—from an illness that they have not yet developed. A disturbing new word, with a distinctly Orwellian ring, has been coined to describe these women: previvors—pre-survivors.

  The second case study of genetic diagnosis concerns schizophrenia and bipolar disorder; it brings us full circle in our story. In 1908, the Swiss German psychiatrist Eugen Bleuler introduced the term schizophrenia to describe patients with a unique mental illness characterized by a terrifying form of cognitive disintegration—the collapse of thinking. The illness, previously called dementia praecox, “precocious madness,” typically struck young men, who experienced a gradual but irreversible breakdown in their cognitive abilities. They heard spectral voices from within, commanding them to perform odd, out-of-place activities (recall Moni’s hissing inner voice that kept repeating, “Piss here; piss here”). Phantasmic visions appeared and disappeared. The capacity to organize information or perform goal-oriented tasks collapsed, and new words, fears, and anxieties emerged, as if from the netherworlds of the mind. In the end, all organized thinking began to crumble, entrapping the schizophrenic in a maze of mental rubble. Bleuler argued that the principal characteristic of the illness was a splitting, or rather splintering, of the cognitive brain. This phenomenon inspired the word schizo-phrenia—“split mind.”

  Like many other genetic diseases, schizophrenia also comes in two forms—familial and sporadic. In some families with schizophrenia, the disorder courses through multiple generations. Occasionally, some families with schizophrenia also have family members with bipolar disorder (Moni, Jagu, Rajesh). In sporadic or de novo schizophrenia, in contrast, the illness arises as a bolt from the blue: a young man from a family with no prior history might suddenly experience the cognitive collapse, often with little or no warning. Geneticists tried to make sense of these patterns, but could not construct a coherent model of the disorder. How could the same illness have sporadic and familial forms? And what was the link between bipolar disorder and schizophrenia, two seemingly unrelated disorders of the mind?

  The first clues about the etiology of schizophrenia came from twin studies. In the 1970s, studies demonstrated a striking degree of concordance among twins. Among identical twins, the chance of the second twin having schizophrenia was 30 to 50 percent, while among fraternal twins, the chance was 10 to 20 percent. If the definition of schizophrenia was broadened to include milder social and behavioral impairments, the concordance among identical twins rose to 80 percent.

  Despite such tantalizing clues pointing to genetic causes, the idea that schizophrenia was a frustrated form of sexual anxiety gripped psychiatrists in the 1970s. Freud had famously attributed paranoid delusions to “unconscious homosexual impulses,” apparently created by dominant mothers and weak fathers. In 1974, the psychiatrist Silvano Arieti attributed the illness to a “domineering, nagging and hostile mother who gives the child no chance to assert himself.” Although the evidence from actual studies suggested nothing of the sort, Arieti’s idea was so seductive—what headier mix than sexism, sexuality, and mental illness?—that it earned him scores of awards and distinctions, including the National Book Award for science.

  It took the full force of human genetics to bring sanity to the study of madness. Throughout the 1980s, fleets of twin studies strengthened the case for a genetic cause of schizophrenia. In study upon study, the concordance among identical twins exceeded that of fraternal twins so strikingly that it was impossible to deny a genetic cause. Families with well-established histories of schizophrenia and bipolar disorder—such as mine—were documented across multiple generations, again pointing to a genetic cause.