But let me return for a moment to the Voltaic battery. The abundant flow of current which it produced was so startling that it was at first doubted whether this 'electric fluid' was the same kind of thing which came in sparks out of the older contraptions. Comparison of their effects led to the realization that the discharges of static electricity from a Leyden Jar had a higher potential or tension, whereas the flow from the battery had a low potential but carried a greater quantity of current. Thus the distinction was made between the potential (voltage), roughly comparable to the gradient of a river-bed, and the quantity of liquid (amperage) that passed through it. But only fifty years later did Faraday realize that the spark from a Leyden Jar could be regarded as a short-lived current; then came Maxwell, who treated currents as moving charges, thus finally unifying the two kinds of electricity: 'frictional' and 'Voltaic'.
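(In present-day notation, which of course post-dates these events, the distinction can be put briefly. The charge delivered is the current integrated over time, and the energy it carries is that charge multiplied by the potential through which it falls:

\[ Q = \int I \, dt, \qquad W = QV. \]

A Leyden Jar discharge has a very high \(V\) but delivers only a minute \(Q\) in a fraction of a second; a Voltaic pile has a modest \(V\) but sustains its current, and hence its \(Q\), for hours. The symbols are the modern conventions, not those of the period.)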
In the meantime, however, that other grand synthesis got underway: the unification of electricity and magnetism. There were several steps. The first link was established in 1820 by the observation of Hans Christian Oersted in Copenhagen that if an electric current flowed through a wire in the vicinity of a magnetic compass, the needle was deflected and turned into a position at right angles to the wire. The news created an immediate sensation in Paris, where Ampère's excitable brain gave off a spark bigger than any Leyden Jar: he realized in a single flash that if an electric current produced a magnetic field, as the reaction of the needle indicated, then all magnetic fields might be due to electric currents -- that magnetism was a by-product of electricity. He let a current run through a spiral coil inside which he placed a steel needle: it became magnetized, and the first electro-magnet was born.*
But how, then, was the 'natural magnetism' of loadstones to be explained, which had no currents running around them? Ampère's answer was that minute currents were circulating in coils inside the atoms of the loadstone. These sub-atomic currents produced magnetic fields, which tended to align themselves with the magnetic field of the biggest loadstone, the earth. The theory at the same time dispensed with the necessity of explaining magnetism by the physical action of poles; it was perhaps the boldest and most surprising idea in this whole development. Unfortunately, Ampère's contemporaries were not 'ripe' for it. To quote D. L. Webster:
Scientists should have reacted to this surprise better than they did -- but scientists are human. The philosophical principle of parsimony in hypotheses should have been their guide. Instead their guide seems to have been habit. Parsimony would have dictated as follows:
1. Whatever we believe about magnets, we must recognize currents in wires as currents.
2. The pole theory of magnets requires us to believe in two types of field producers, poles and currents, whereas Ampère's theory requires only currents.
3. The pole theory requires two very different sets of laws for magnetic fields, one for fields due to poles and the other for fields due to currents, whereas Ampère's theory requires only one set of laws.
4. Therefore, we shall follow Ampère.
But poles were treated as real for nearly another century. [6]
Yet Ampère's idea was never entirely forgotten. Maxwell compared Ampère's sub-atomic coils to miniature spinning-tops which always tend to preserve the direction of their axes; he tried to magnetize a piece of iron by rotating it fast. In 1913, when Niels Bohr invented his model of the atom as a miniature solar system, it was thought that the orbital motions of the electrons round the nucleus provided the Ampèrean circuits. This turned out to be part of the truth; but the principal source of magnetism was found to be, even more surprisingly, a spinning motion of the electrons round their own axes. An electron, of course, can hardly be said to have an axis since it is now regarded as something in the nature of a blur; but mathematically the model worked, and that is all one can ask for in the present state of physics. A century after Oersted, magnetism and electricity were finally reduced to a common source.
But I have been anticipating the happy end. The next stage, after Ampère had shown that an electric current will produce a magnetic field, was the discovery by Faraday (in 1831) that magnetism could be 'directly converted into electricity' by moving magnet and conducting coil relative to each other.* This led to the invention of the dynamo, and later of the electric motor; but we are concerned with theory, not with the ubiquitous applications of electric energy.
Faraday, as we know, was a visualizer, who saw the universe patterned by lines of force -- like the familiar diagrams of iron filings grouped round a magnet. James Clerk Maxwell, who inaugurated the post-Newtonian age in physics, was a super-visualizer. He took Faraday's imaginary lines of force and put them into imaginary tubes carrying a fluid; then he abolished the spaces between the tubes so that they became 'mere surfaces, directing the motion of a fluid filling up all space' -- the ether. Next, he applied to this model the rules of a game which bore no relation at all to electro-magnetism -- hydro-dynamics, with its vortices and eddies and changing pressures.** One conclusion which emerged from this imaginary operation was that all changes in electric and magnetic force (for instance, those caused by an oscillating circuit) sent waves spreading through space; and that these waves had the same transverse character, and the same speed, as light. 'We can scarcely avoid the inference', he wrote in a monumental sentence, 'that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena.'
Thus after electricity and magnetism had been united, both were now united to light. Electro-magnetic radiations came to be regarded as rapid alternations of electrical and magnetic stresses in space, where each change in the electric stress gives rise to a magnetic stress, which again gives rise to an electric stress and so on. Soon the range of these radiations was shown to comprise not only the visible spectrum between the ultra-violet and the infra-red of radiant heat, but also the ultra-short gamma rays of radioactivity and the kilometre-long waves used in radio-communication.
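(In modern vector notation, which Maxwell himself did not use, the reasoning can be compressed into a few lines. In empty space his equations read

\[ \nabla \cdot \mathbf{E} = 0, \quad \nabla \cdot \mathbf{B} = 0, \quad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \quad \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}, \]

and combining the two curl equations yields a wave equation,

\[ \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \mathrm{m/s}, \]

a propagation speed coinciding with the measured speed of light -- the numerical coincidence behind Maxwell's 'monumental sentence'. The notation is the textbook form, not a quotation from Maxwell.)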
Perhaps the most fascinating aspect of Maxwell's genius is that as soon as he had worked out the mathematical formulation of his theory, he discarded the model by means of which he had reached it. It was as if a man, after climbing a ladder to get a free view over his surroundings, had kicked out the ladder from under him, and remained freely suspended in the air. Gone were the tubes, the vortices, the ether; all that remained were 'fields' of an abstract, non-substantial nature, and the mathematical formalism which described the propagation of real waves in an apparently non-existent medium. It was the great turning point in physical science, when the aspiration to arrive at intelligible, mechanical models was abandoned. This renunciation, born of necessity, soon hardened into dogma -- a secular version of the Commandment 'Thou shalt not make unto thee any graven image' -- of gods or atoms.*
The transition from model-making to mathematical abstraction is strikingly illustrated by the fact that Maxwell himself left it to others (to Heinrich Rudolf Hertz, as it came to pass) to give empirical proof of his electro-magnetic waves. As Crowther wrote:
The General Equations of the Electro-magnetic Field were more real to him than material phenomena he could know in the laboratory. Physicists have often wondered why Maxwell made no attempt to prove experimentally the existence of electro-magnetic waves. He probably felt he was better acquainted with the waves through the medium of the General Equations, and would 'not have known them any better, perhaps not so well,' if he had met them in the laboratory. [7]
Yet even Maxwell had his blind spots. The electron as a basic, quasi-atomic unit of electricity was clearly implied in his model of ether-vortices, and in his theory of electrolysis. Yet he rejected the concept of 'particles' of electricity, as Faraday before him had rejected it. Thus, as already mentioned, it was left to J. J. Thomson to take the next decisive step: the identification of the electron as an elementary unit of electricity, and at the same time an elementary particle of matter. Some fifteen years later Rutherford discovered that the atom had a positively charged nucleus; Moseley discovered that the number of electrons in an atom determined its place in the periodic system; and Bohr made his famous model of electrons circling round the nucleus like planets round the sun. Matter and electricity had merged into a single matrix.
We have followed, though only in the scantest outline, the successive confluences into a vast river-delta, of electricity, magnetism, light, heat, and other electro-magnetic radiations; of chemistry, biochemistry, and atomic physics. This development was, as we have seen (p. 228), accompanied by the realization that the various 'powers of nature' were merely different forms of energy. In earlier days, and well into the nineteenth century, each of these 'powers' was thought to be contained in a material substance, a subtle fluid or vapour or effluvium: heat in the phlogiston; organic energy in the 'vital fluid'; gravity in the ether; electricity and magnetism in their separate effluvia. The word 'energy', from the Greek energos (work), was first used by Thomas Young in 1807 to designate kinetic energy only. But by that time Rumford had already shown by an ingenious experiment that mechanical energy could be converted into heat: he made a blunt boring machine, driven by horses, work against a metal cylinder underwater, and demonstrated that the heat thus produced actually brought the water to the boil. By the middle of the century it became evident that the powers of nature were convertible: mechanical motion into heat, heat into motion, motion into electricity, electricity into magnetism, and so forth. Thus one by one the various 'subtle fluids' dropped out of the game, and were replaced by equations determining the exchange rates, as it were, for the conversion of one kind of energy-currency into another. Lastly, Einstein and his successors taught us that mass and energy, particle and wave, are merely two aspects of one and the same basic process. Only in one respect have they failed so far: in their attempts to link the gravitational field and the electro-magnetic field in a single system of equations, a unified field theory.
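(Two of these 'exchange rates', given here in their modern textbook form rather than as stated in the original sources: Joule's mechanical equivalent of heat,

\[ 1 \ \text{calorie} \approx 4.19 \ \text{joules}, \]

which fixes the rate at which mechanical work converts into heat; and Einstein's equivalence of mass and energy,

\[ E = mc^2, \]

which makes mass itself one more entry in the same ledger. The numerical value is the standard modern figure, not one quoted in the text.)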
NOTES
To p. 662. The 'dip', or magnetic inclination, seems to have been discovered independently by Georg Hartmann, a German clergyman, in 1544, and by Robert Norman, a compass-maker from Wapping. Norman and Mercator also anticipated Gilbert by placing the source of magnetic attraction in the earth.
To p. 668. The experiment was actually suggested to Ampère by Arago.
To p. 670. Faraday's original formulation was indeed entirely relativistic. According to Newtonian mechanics, however, it did make a difference whether the wire was moved or the magnet. This paradoxical asymmetry was one of the principal considerations which led Einstein to the theory of special relativity (cf. Polanyi, 1957, pp. 10-11).
To p. 670. Vortices had already appeared in Kepler's and Descartes' explanations; and Helmholtz, too, had compared the dynamics of fluids with electric currents and magnetic fields; but Maxwell's electro-hydro-dynamics were of an incomparably more refined order.
To p. 671. Maxwell himself was less dogmatic about it. 'For the sake of persons of different types of mind, scientific truth should be presented in different forms and should be regarded as equally scientific whether it appears in the robust form and vivid colouring of a physical illustration or in the tenuity and paleness of a symbolical expression.'
APPENDIX II:
SOME FEATURES OF GENIUS
1. THE SENSE OF WONDER
In one of his essays -- 'The Cutting of an Agate' -- William Butler Yeats voiced one of the silliest popular fallacies of our times:
Those learned men who are a terror to children and an ignominious sight in lovers' eyes, all those butts of a traditional humour where there is something of the wisdom of peasants, are mathematicians, theologians, lawyers, men of science of various kinds.
The fallacy consists in the identification of 'men of science of various kinds' with the lowest kind: the figure of the uninspired pedant in the waxworks of popular imagination (p. 256). One might as well identify 'the artist' with the factory-girls who put in the colour on 'hand-painted' souvenirs.
It is a fallacy of relatively recent origin. Tillyard [1] and Marjorie Nicolson [2] have shown how profoundly the Pythagorean revival had influenced Shakespeare and transformed the Elizabethan world-picture. Perhaps the greatest experience of Milton's youth was peering for the first time through a Galilean telescope:
Before [his] eyes in sudden view appear
The secrets of the hoary Deep -- a dark
Illimitable ocean, without bound,
Without dimension . . .
And we remember John Donne's excitement caused by Kepler's discoveries:
Man hath weav'd out a net, and this net throwne
Upon the Heavens, and now they are his owne . . .
The sense of wonder was shared by mystic, poet, and scientist alike; their falling apart dates only from the end of the nineteenth century. In Book One, XI, I have discussed the scientist's motivational drive, and the emotions to which it gives rise: the present appendix is meant to illustrate these general considerations by concrete examples from the lives of a few outstanding men.
Aristotle on Motivation
The mental image that one tries to form of a white-clad, sandalled member of the Pythagorean Brotherhood, living around 530 B.C. in Croton, southern Italy, is necessarily hazy. But at least we know that the Brotherhood was both a scientific academy and a monastic order; that its members led an ascetic communal life where all property was shared, thus anticipating the Essenes and the primitive Christian communities. We know that much of their time was spent in contemplation, and that initiation into the higher mysteries of mathematics, astronomy, and medicine depended upon the purification of spirit and body, which the aspirant had to achieve by abstinences and examinations of conscience. Pythagoras himself, like St. Francis, is said to have preached to animals; the whole surviving tradition indicates that his disciples, while engaged in number-lore and astronomical calculations, firmly believed that a true scientist must be a saint, and that the wish to become one was the motivation of his labours.
The Hippocratics followed a materialist philosophy; yet that wonderfully precise ethical commandment, the Hippocratic Oath, prescribed not only that the physician should do everything in his powers to help the sick, but also that he should refrain, in the patient's house, 'from any act of seduction, of male or female, bond or free' -- a truly heroic act of self-denial. The motivation of Greek science in general was summed up in a passage by Aristotle, from which I have briefly quoted before (my italics):
Men were first led to study [natural] philosophy, as indeed they are today, by wonder. At first they felt wonder about the more superficial problems; afterwards they advanced gradually by perplexing themselves over greater difficulties; e.g., the behaviour of the moon, the phenomena of the sun, and the origination of the universe. Now he who is perplexed and wonders believes himself to be ignorant. Hence even the lover of myths is, in a sense, a philosopher, for a myth is a tissue of wonders. Thus if they took to philosophy to escape ignorance, it is patent that they were pursuing science for the sake of knowledge itself, and not for utilitarian applications. This is confirmed by the course of historical development itself. For nearly all the requisites both of comfort and social refinement had been secured before the quest for this form of enlightenment began. So it is clear that we do not seek it for the sake of any ulterior application. Just as we call a man free who exists for his own ends and not for those of another, so it is with this which is the only free man's science: it alone of the sciences exists for its own sake. [3]
It is amusing to note Aristotle's belief that applied science and technology had completed their task long before his time -- as the italicized lines and other passages in his writings clearly indicate. His statement is somewhat biassed, because it does not take into account the utilitarian element in the origin of geometry: land-surveying, and of astronomy: calendar-making. Nevertheless, his summing up of the motives which drove the Greek men of science seems to be by and large true. Thus Archimedes, the greatest of them, was compelled by necessity to invent a whole series of spectacular mechanical devices -- including the water screw, and some engines of war which brought him all the fame and glory an inventor can dream of. Yet such was his contempt for these practical inventions that he refused to leave a written record of them. His passions were mathematics and pure science; his famous words, 'give me but a firm spot on which to stand and I will move the earth', reflect a metaphysical fantasy, not an engineer's ambitions. When Syracuse fell in 212 B.C. to the Roman general Marcellus, the sage, in the midst of the turmoil and massacre, was calmly drawing geometrical figures in the sand; according to tradition, his last words, after being run through the body by a Roman soldier, were: 'Pray, do not disturb my circles'. Apocryphal or not, that tradition symbolizes the Greek attitude to science as a quest transcending the mortal self.