Haldane’s argument can be easily outlined. He summarized the data, including death tolls and casualty rates, from gas attacks in World War I and proclaimed the results more humane than the consequences of conventional weaponry.
A case can be made out for gas as a weapon on humanitarian grounds, based on the very small proportion of killed to casualties from gas in the War, and especially during its last year [when better gas masks had been made and widely distributed].
Haldane based this conclusion on two arguments. He first listed the chemical agents used in the war and branded most of them as not dangerous for having only transient effects (making the assumption that temporarily insensate soldiers would be passed by or humanely captured rather than slaughtered). He regarded the few chemicals that could induce more permanent harm—mustard gas, in particular—as both hard to control and relatively easy to avoid, with proper equipment. Second, he called upon his own frequent experience with poison gases and stated a strong preference for these agents over his equally personal contact with bullets:
Besides being wounded, I have been buried alive, and on several occasions in peacetime I have been asphyxiated to the point of unconsciousness. The pain and discomfort arising from the other experiences were utterly negligible compared with those produced by a good septic shell wound.
Haldane therefore concluded that gas, for reasons of effectiveness as a weapon and relative humaneness in causing few deaths compared with the number of temporary incapacitations, should be validated and further developed as a primary military tactic:
I certainly share their [pacifists’] objection to war, but I doubt whether by objecting to it we are likely to avoid it in future, however lofty our motives or disinterested our conduct.… If we are to have more wars, I prefer that my country should be on the winning side.… If it is right for me to fight my enemy with a sword, it is right for me to fight him with mustard gas; if the one is wrong, so is the other.
I do not flinch before this last statement from the realm of ultimate realpolitik. The primary and obvious objection to Haldane’s thesis in Callinicus—not only as raised now by me in the abstract, but also as advanced by Haldane’s numerous critics in 1925—holds that, whatever the impact of poison gas in its infancy in World War I (and I do not challenge Haldane’s assessment), unrestrained use of this technology may lead to levels of effectiveness and numbers of deaths undreamed of in earlier warfare. Better the devil we know best than a devil seen only as an ineffective baby just introduced into our midst. If we can squelch this baby now, by moral restraint and international agreement, let’s do so before he grows into a large and unstoppable adult potentially far more potent than any devil we know already.
World War I paraphernalia for protection from poison gas attacks.
(I should offer the proviso that, in making this general argument for moral restraint, I am speaking only of evident devils, or destructive technologies with no primary role in realms usually designated as human betterment: healing the sick, increasing agricultural yields, and so on. I am not talking about the more difficult, and common, problem of new technologies—cloning comes to mind as the current topic of greatest interest [see chapter 19]—with powerfully benevolent intended purposes but also some pretty scary potential misuses in the wrong hands, or in the decent hands of people who have not pondered the unintended consequences of good deeds. Such technologies may be regulated, but surely should not be banned.)
Haldane’s response to this obvious objection reflects all the arrogance described in the first part of this essay: I have superior scientific knowledge of this subject and can therefore be trusted to forecast future potentials and dangers; from what I know of chemistry, and from what I have learned from the data of World War I, chemical weapons will remain both effective and relatively humane and should therefore be further developed. In other words, and in epitome: trust me.
One of the grounds given for objection to science is that science is responsible for such horrors as those of the late War. “You scientific men (we are told) never think of the possible applications of your discoveries. You do not mind whether they are used to kill or to cure. Your method of thinking, doubtless satisfactory when dealing with molecules and atoms, renders you insensible to the difference between right and wrong.” … The objection to scientific weapons such as the gases of the late War, and such new devices as may be employed in the next, is essentially an objection to the unknown. Fighting with lances or guns, one can calculate, or thinks one can calculate, one’s chances. But with gas or rays or microbes one has an altogether different state of affairs.
… What I have said about mustard gas might be applied, mutatis mutandis, to most other applications of science to human life. They can all, I think, be abused, but none perhaps is always evil; and many, like mustard gas, when we have got over our first not very rational objections to them, turn out to be, on the whole, good.
In fact, Haldane didn’t even grant moral arguments—or the imposition of moral restraints—any role at all in the prevention of war. He adopted the same parochial and arrogant position, still all too common among scientists, that war can be ended only by rational and scientific research: “War will be prevented only by a scientific study of its causes, such as has prevented most epidemic diseases.”
I am no philosopher, and I do not wish to combat Haldane’s argument on theoretical grounds here. Let us look instead at the basic empirical evidence, unwittingly presented by Haldane himself in Callinicus. I therefore propose the following test: if Haldane’s argument should prevail, and scientific recommendations should be trusted because scientists can forecast the future in areas of their expertise, then the success of Haldane’s own predictions will validate his approach.
I propose that two great impediments generally stand in the way of successful prediction: first, our inability, in principle, to know much about complex futures along the contingent and nondeterministic pathways of history; and second, the personal hubris that leads us to think we act in a purely and abstractly rational manner, when our views really arise from unrecognized social and personal prejudices.
Callinicus contains an outstanding example of each error, and I rest my case for moral restraint here. Haldane does consider the argument that further development of chemical and biological weapons might prompt an investigation into even more powerful technologies of destruction—in particular, to unleashing the forces of the atom. But he dismisses this argument on scientific grounds of impossible achievement:
Of course, if we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.… [But] we cannot utilize subatomic phenomena.… We cannot make apparatus small enough to disintegrate or fuse atomic nuclei.… We can only bombard them with particles of which perhaps one in a million hit, which is like firing keys at a safe-door from a machine gun a mile away in an attempt to open it.… We know very little about the structure of the atom and almost nothing about how to modify it. And the prospect of constructing such an apparatus seems to me to be so remote that, when some successor of mine is lecturing to a party spending a holiday on the moon, it will still be an unsolved (though not, I think, an ultimately unsolvable) problem.
To which, we need only reply: Hiroshima, 1945; Mr. Armstrong on the Moon, 1969. And we are still here—in an admittedly precarious atomic world—thanks to moral and political restraint.
But the even greater danger of arrogant and “rational” predictions unwittingly based on unrecognized prejudice led Haldane to the silliest statement he ever made—one that might be deemed socially vicious if our laughter did not induce a more generous mood. Haldane tries to forecast the revised style of warfare that mustard gas must impose upon future conflicts. He claims that some people have a natural immunity, differently distributed among our racial groups. He holds that 20 percent of whites, but 80 percent of blacks, are unaffected by the gas. Haldane then constructs a truly dotty scenario for future gas warfare: vanguards of black troops will lead the attack; German forces, with less access to this aspect of human diversity, might suffer some disadvantage, but their superior chemical knowledge should see them through, and balances should therefore be maintained:
It seems, then, that mustard gas would enable an army to gain ground with far less killed on either side than the methods used in the late War, and would tend to establish a war of movement leading to a fairly rapid decision, as in the campaigns of the past. It would not upset the present balance of power, Germany’s chemical industry being counterposed by French negro troops. Indians [that is, East Indians available to British forces] may be expected to be nearly as immune as negroes.
But now Haldane sees a hole in his argument. He steps back, breathes deeply, and finds a solution. Thank God for that 20 percent immunity among whites!
The American Army authorities made a systematic examination of the susceptibility of large numbers of recruits. They found that there was a very resistant class, comprising 20% of the white men tried, but no less than 80% of the negroes. This is intelligible, as the symptoms of mustard gas blistering and sunburn are very similar, and negroes are pretty well immune to sunburn. It looks, therefore, as if, after a slight preliminary test, it should be possible to obtain colored troops who would all be resistant to mustard gas blistering in concentrations harmful to most white men. Enough resistant whites are available to officer them.
I am simply astonished (and also bemused) that this brilliant man, who preached the equality of humankind in numerous writings spanning more than fifty years, could have been so mired in conventional racial prejudices, and so wedded to the consequential and standard military practices of European and American armies, that he couldn’t expand his horizons far enough even to imagine the possibility of competent black officers—and therefore had to sigh in relief at the availability of a few good men among the rarely resistant whites. If Haldane couldn’t anticipate even this minor development in human relationships and potentialities, why should we trust his judgments about the far more problematical nature of future wars?
(This incident should carry the same message for current discussions about underrepresentation of minorities as managers of baseball teams or as quarterbacks in football. I also recall a famous and similar episode of ridiculously poor prediction in the history of biological determinism—the estimate by a major European car manufacturer, early in the century, that his business would be profitable but rather limited. European markets, he confidently predicted, would never require more than a million automobiles—for only so many men in the lower classes possessed sufficient innate intellectual ability to work as chauffeurs! Don’t you love the triply unacknowledged bias of this statement—that poor folks rarely rank high in fixed genetic intelligence and that neither women nor rich folks could ever be expected to drive a car?)
The logic of my general argument must lead to a truly modest proposal. Wouldn’t we all love to fix the world in one fell swoop of proactive genius? We must, of course, never stop dreaming and trying. But we must also temper our projects with a modesty born of understanding that we cannot predict the future and that the best-laid plans of mice and men often founder into a deep pit dug by unanticipated consequences. In this context, we should honor what might be called the “negative morality” of restraint and consideration, a principle that wise people have always understood (as embodied in the golden rule) and dreamers have generally rejected, sometimes for human good but more often for the evil that arises when demagogues and zealots try to impose their “true belief” upon all humanity, whatever the consequences.
The Hippocratic oath, often misunderstood as a great document about general moral principles in medicine, should be read as a manifesto for protecting the secret knowledge of a guild and for passing skills only to designated initiates. But the oath also includes a preeminent statement, later recast as a Latin motto for physicians, and ranking (in my judgment) with the Socratic dictum “know thyself” as one of the two greatest tidbits of advice from antiquity. I can imagine no nobler rule of morality than this single phrase, which every human being should engrave into heart and mind: primum non nocere—above all, do no harm.
VI
Evolution at All Scales
21
Of Embryos and Ancestors
“EVERY DAY, IN EVERY WAY, I’M GETTING BETTER and better.” I had always regarded this famous phrase as a primary example of the intellectual vacuity that often passes for profundity in our current era of laid-back, New Age bliss—a verbal counterpart to the vapidity of the “have a nice day” smiley face. But when I saw this phrase chiseled in stone on the pediment of a French hospital built in the early years of our century, I knew that I must have missed a longer and more interesting pedigree. This formula for well-being, I then discovered, had been devised in 1920 by Emile Coué (1857–1926), a French pharmacist who made quite a stir in the pop-psych circles of his day with a theory of self-improvement through autosuggestion based on frequent repetition of this mantra—a treatment that received the name of Couéism. (In a rare example of improvement in translation, this phrase gains both a rhyme and better flow, at least to my ears, when converted to English from Coué’s French original—“tous les jours, à tous les points de vue, je vais de mieux en mieux.”)
I don’t doubt the efficacy of Coué’s mantra, for the “placebo effect” (its only possible mode of action) should not be dismissed as a delusion, but cherished as a useful strategy for certain forms of healing—a primary example of the influence that mental attitudes can wield upon our physical sense of well-being. However, as a general description for the usual style and pacing of human improvement, the constant and steady incrementalism of Coué’s motto—a twentieth-century version of an ancient claim embodied in the victory cry of Aesop’s tortoise, “slow and steady wins the race”—strikes me as only rarely applicable, and surely secondary to the usual mode of human enlightenment, either attitudinal or intellectual: that is, not by global creep forward, inch by subsequent inch, but rather in rushes or whooshes, usually following the removal of some impediment, or the discovery of some facilitating device, either ideological or technological.
The glory of science lies in such innovatory bursts. Centuries of vain speculation dissolved in months before the resolving power of Galileo’s telescope, trained upon the full range of cosmic distances, from the moon to the Milky Way (see chapter 2). About 350 years later, centuries of conjecture and indirect data about the composition of lunar rocks melted before a few pounds of actual samples, brought back by Apollo after Mr. Armstrong’s small step onto a new world.
In the physical sciences, such explosions of discovery usually follow the invention of a device that can, for the first time, penetrate a previously invisible realm—the “too far” by the telescope, the “too small” by the microscope, the imperceptible by X-rays, or the unreachable by spaceships. In the humbler world of natural history, episodes of equal pith and moment often follow a eureka triggered by continually available mental, rather than expensively novel physical, equipment. In other words, great discovery often requires a map to a hidden mine filled with gems then easily gathered by conventional tools, not a shiny new space-age machine for penetrating previously inaccessible worlds.
The uncovering of life’s early history has featured several such cascades of discovery following a key insight about proper places to look—and I introduce this year’s wonderful story by citing a previous incident of remarkably similar character from the last generation of our science (literally so, for this year’s discoverer wrote his Ph.D. dissertation under the guidance of the first innovator).
When, as a boy in the early 1950s, I first became fascinated with paleontology and evolution, the standard dogma about the origin of life proclaimed such an event inherently improbable, but achieved on this planet only because the immensity of geological time must convert the nearly impossible into the virtually certain. (With no limit on the number of tries, you will eventually flip fifty heads in a row with an honest coin.) As evidence for asserting the exquisite specialness of life in the face of overwhelmingly contrary odds, these conventional sources cited the absence of any fossils representing the first half of the earth’s existence—a span of more than two billion years, often formally designated on older geological charts as the Azoic (literally “lifeless”) era. Although scientists do recognize the limitations of such “negative evidence” (the first example of a previously absent phenomenon may, after all, turn up tomorrow), this failure to find any fossils for geology’s first two billion years did seem fairly persuasive. Paleontologists had been searching assiduously for more than a century and had found nothing but ambiguous scraps and blobs. Negative results based on such sustained effort over so many years do begin to inspire belief.
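To put rough numbers behind the coin-flip analogy (an illustrative back-of-the-envelope calculation of my own, not part of the original argument), the arithmetic runs as follows:

P(\text{fifty heads in one block of fifty tosses}) = \left(\tfrac{1}{2}\right)^{50} \approx 8.9 \times 10^{-16}

P(\text{at least one all-heads block in } n \text{ independent blocks}) = 1 - \left(1 - 2^{-50}\right)^{n} \longrightarrow 1 \quad \text{as } n \to \infty

With unlimited tries, a nearly impossible single event becomes a virtual certainty—precisely the role that the old dogma assigned to the immensity of geological time.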
But the impasse broke in the 1950s, when Elso Barghoorn and Stanley Tyler reported fossils of unicellular life in rocks more than two billion years old. Paleontologists, to summarize a long and complex story with many exciting turns and notable heroes, had been looking in the wrong place—in conventional sediments that rarely preserve the remains of single-celled bacterial organisms without hard parts. We had not realized that life had remained so simple for so long, or that the ordinary sites for good fossil records could not preserve such organisms.