The restrained use of decoded information worked perfectly. Even when the British used intercepted communications to inflict heavy losses, the Germans did not suspect that the Enigma code had been broken. They believed that their level of encryption was so high that it would be absolutely impossible to crack their codes. Instead they blamed any exceptional losses on the British secret service infiltrating their own ranks.

  Because of the secrecy surrounding the work carried out at Bletchley by Turing and his team, their immense contribution to the war effort could never be publicly acknowledged, even for many years after the war. It used to be said that the First World War was the chemists’ war and that the Second World War was the physicists’ war. In fact, from the information revealed in recent decades, it is probably true to say that the Second World War was also the mathematicians’ war – and in the case of a third world war their contribution would be even more critical.

  Throughout his code-breaking career Turing never lost sight of his mathematical goals. The hypothetical machines had been replaced with real ones, but the esoteric questions remained. By the end of the war Turing had helped build Colossus, a fully electronic machine consisting of 1500 valves, which were much faster than the electromechanical relays employed in the bombes. Colossus was a computer in the modern sense of the word, and with the extra speed and sophistication Turing began to think of it as a primitive brain – it had a memory, it could process information, and states within the computer resembled states of mind. Turing had transformed his imaginary machine into the first real computer.

  When the war ended Turing continued to build increasingly complex machines, such as the Automatic Computing Engine (ACE). In 1948 he moved to Manchester University and built the world’s first computer to have an electronically stored program. Turing had provided Britain with the most advanced computers in the world, but he would not live long enough to see their most remarkable calculations.

  In the years after the war Turing had been under surveillance from British Intelligence, who were aware that he was a practising homosexual. They were concerned that the man who knew more about Britain’s security codes than anyone else was vulnerable to blackmail and decided to monitor his every move. Turing had largely come to terms with being constantly shadowed, but in 1952 he was arrested for violation of British homosexuality statutes. This humiliation made life intolerable for Turing. Andrew Hodges, Turing’s biographer, describes the events leading up to his death:

  Alan Turing’s death came as a shock to those who knew him … That he was an unhappy, tense, person; that he was consulting a psychiatrist and suffered a blow that would have felled many people – all this was clear. But the trial was two years in the past, the hormone treatment had ended a year before, and he seemed to have risen above it all.

  The inquest, on 10 June 1954, established that it was suicide. He had been found lying neatly in his bed. There was froth round his mouth, and the pathologist who did the post-mortem easily identified the cause of death as cyanide poisoning … In the house was a jar of potassium cyanide, and also a jar of cyanide solution. By the side of his bed was half an apple, out of which several bites had been taken. They did not analyse the apple, and so it was never properly established that, as seemed perfectly obvious, the apple had been dipped in the cyanide.

  Turing’s legacy was a machine that could take a calculation which would be impractically long if performed by a human and complete it in a matter of hours. Today’s computers perform more calculations in a split second than Fermat performed in his entire career. Mathematicians who were still struggling with Fermat’s Last Theorem began to use computers to attack the problem, relying on a computerised version of Kummer’s nineteenth-century approach.

  Kummer, having discovered a flaw in the work of Cauchy and Lamé, showed that the outstanding problem in proving Fermat’s Last Theorem was disposing of the cases when n equals an irregular prime – for values of n up to 100 the only irregular primes are 37, 59 and 67. At the same time Kummer showed that in theory all irregular primes could be dealt with on an individual basis, the only problem being that each one would require an enormous amount of calculation. To make his point Kummer and his colleague Dimitri Mirimanoff put in the weeks of calculation required to dispose of the three irregular primes less than 100. However, they and other mathematicians were not prepared to begin on the next batch of irregular primes between 100 and 1,000.

  A few decades later the problems of immense calculation began to vanish. With the arrival of the computer awkward cases of Fermat’s Last Theorem could be dispatched with speed, and after the Second World War teams of computer scientists and mathematicians proved Fermat’s Last Theorem for all values of n up to 500, then 1,000, and then 10,000. In the 1980s Samuel S. Wagstaff of the University of Illinois raised the limit to 25,000 and more recently mathematicians could claim that Fermat’s Last Theorem was true for all values of n up to 4 million.

  Although outsiders felt that modern technology was at last getting the better of the Last Theorem, the mathematical community were aware that their success was purely cosmetic. Even if supercomputers spent decades proving one value of n after another they could never prove every value of n up to infinity, and therefore they could never claim to prove the entire theorem. Even if the theorem were proved true for all values of n up to a billion, there would be no reason why it should be true for a billion and one; if it were proved up to a trillion, there would be no reason why it should be true for a trillion and one, and so on ad infinitum. Infinity is unobtainable by the mere brute force of computerised number crunching.

  David Lodge in his book The Picturegoers gives a beautiful description of eternity which is also relevant to the parallel concept of infinity: ‘Think of a ball of steel as large as the world, and a fly alighting on it once every million years. When the ball of steel is rubbed away by the friction, eternity will not even have begun.’

  All that computers could offer was evidence in favour of Fermat’s Last Theorem. To the casual observer the evidence might seem to be overwhelming, but no amount of evidence is enough to satisfy mathematicians, a community of sceptics who will accept nothing other than absolute proof. Extrapolating a theory to cover an infinity of numbers based on evidence from a few numbers is a risky (and unacceptable) gamble.

  One particular sequence of primes shows that extrapolation is a dangerous crutch upon which to rely. In the seventeenth century mathematicians showed by detailed examination that the following numbers are all prime:

  31; 331; 3,331; 33,331; 333,331; 3,333,331; 33,333,331.

  The next numbers in the sequence grow rapidly, and checking whether or not they are also prime would have taken considerable effort. At the time some mathematicians were tempted to extrapolate from the pattern so far, and assume that all numbers of this form are prime. However, the next number in the pattern, 333,333,331, turned out not to be a prime:

        333,333,331 = 17 × 19,607,843
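  Today the pattern can be checked in moments. The following Python sketch (an anachronism, of course, and only an illustration) builds each number in the sequence and factorises it by trial division:

```python
# Testing the 3...31 pattern by trial division. The first eight numbers
# come out prime; the ninth does not.

def smallest_factor(n: int) -> int:
    """Return the smallest prime factor of n (or n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

number = 31
for _ in range(9):
    factor = smallest_factor(number)
    if factor == number:
        print(f"{number:>11,} is prime")
    else:
        print(f"{number:>11,} = {factor} x {number // factor:,}")
    number = number * 10 + 21   # next number in the pattern: 31 -> 331 -> 3,331 -> ...
```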

  Another good example which demonstrates why mathematicians refused to be persuaded by the evidence of computers is the case of Euler’s conjecture. Euler claimed that there were no solutions to an equation not dissimilar to Fermat’s equation:

        x⁴ + y⁴ + z⁴ = w⁴

where x, y, z and w are non-zero whole numbers.

  For two hundred years nobody could prove Euler’s conjecture, but on the other hand nobody could disprove it by finding a counter-example. First manual searches and then years of computer sifting failed to find a solution. Lack of a counter-example was strong evidence in favour of the conjecture. Then in 1988 Noam Elkies of Harvard University discovered the following solution:

        2,682,440⁴ + 15,365,639⁴ + 18,796,760⁴ = 20,615,673⁴

  Despite all the evidence Euler’s conjecture turned out to be false. In fact Elkies proved that there were infinitely many solutions to the equation. The moral is that you cannot use evidence from the first million numbers to prove a conjecture about all numbers.
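  Anyone wishing to confirm the counter-example can do so with exact integer arithmetic; the sketch below uses the four numbers usually quoted for Elkies’s 1988 solution:

```python
# Checking Elkies's counterexample to Euler's conjecture with exact
# integer arithmetic (Python integers never overflow).

x, y, z, w = 2_682_440, 15_365_639, 18_796_760, 20_615_673

assert x**4 + y**4 + z**4 == w**4   # raises an error if the equality fails
print(f"{x}^4 + {y}^4 + {z}^4 = {w}^4 = {w**4:,}")
```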

  But the deceptive nature of Euler’s conjecture is nothing compared to the overestimated prime conjecture. By scouring through larger and larger regimes of numbers, it becomes clear that the prime numbers become harder and harder to find. For instance, between 0 and 100 there are 25 primes but between 10,000,000 and 10,000,100 there are only 2 prime numbers. In 1791, when he was just fourteen years old, Carl Gauss predicted the approximate manner in which the frequency of prime numbers among all the other numbers would diminish. The formula was reasonably accurate but always seemed slightly to overestimate the true distribution of primes. Testing for primes up to a million, a billion or a trillion would always show that Gauss’s formula was marginally too generous and mathematicians were strongly tempted to believe that this would hold true for all numbers up to infinity, and thus was born the overestimated prime conjecture.
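  A direct count makes the thinning of the primes vivid. The following sketch simply tests every number in the two windows mentioned above by trial division:

```python
# Counting the primes in two windows of width 100, one at the bottom of
# the number line and one around ten million.

def is_prime(n: int) -> bool:
    """Trial division; fast enough for numbers around ten million."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

low  = [n for n in range(0, 101) if is_prime(n)]
high = [n for n in range(10_000_000, 10_000_101) if is_prime(n)]

print(len(low), "primes between 0 and 100")
print(len(high), "primes between 10,000,000 and 10,000,100:", high)
```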

  Then, in 1914, J.E. Littlewood, G.H. Hardy’s collaborator at Cambridge, proved that in a sufficiently large regime Gauss’s formula would underestimate the number of primes. In 1955 S. Skewes showed that the underestimate would occur sometime before reaching the number

        10^(10^(10^34))

  This is a number beyond the imagination, and beyond any practical application. Hardy called Skewes’s number ‘the largest number which has ever served any definite purpose in mathematics’. He calculated that if one played chess with all the particles in the universe (10⁸⁷), where a move meant simply interchanging any two particles, then the number of possible games was roughly Skewes’s number.

  There was no reason why Fermat’s Last Theorem should not turn out to be as cruel and deceptive as Euler’s conjecture or the overestimated prime conjecture.

  The Graduate

  In 1975 Andrew Wiles began his career as a graduate student at Cambridge University. Over the next three years he was to work on his Ph.D. thesis and in that way serve his mathematical apprenticeship. Each student was guided and nurtured by a supervisor and in Wiles’s case that was the Australian John Coates, a professor at Emmanuel College, originally from Possum Brush, New South Wales.

  Coates still recalls how he adopted Wiles: ‘I remember a colleague told me that he had a very good student who was just finishing part III of the mathematical tripos, and he urged me to take him as a student. I was very fortunate to have Andrew as a student. Even as a research student he had very deep ideas and it was always clear that he was a mathematician who would do great things. Of course, at that stage there was no question of any research student starting work directly on Fermat’s Last Theorem. It was too difficult even for a thoroughly experienced mathematician.’

  For the past decade everything Wiles had done was directed towards preparing himself to meet Fermat’s challenge, but now that he had joined the ranks of the professional mathematicians he had to be more pragmatic. He remembers how he had to temporarily surrender his dream: ‘When I went to Cambridge I really put aside Fermat. It’s not that I forgot about it – it was always there – but I realised that the only techniques we had to tackle it had been around for 130 years. It didn’t seem that these techniques were really getting to the root of the problem. The problem with working on Fermat was that you could spend years getting nowhere. It’s fine to work on any problem, so long as it generates interesting mathematics along the way – even if you don’t solve it at the end of the day. The definition of a good mathematical problem is the mathematics it generates rather than the problem itself.’

  It was John Coates’s responsibility to find Andrew a new obsession, something which would occupy his research for at least the next three years. ‘I think all a research supervisor can do for a student is try and push him in a fruitful direction. Of course, it’s impossible to be sure what is a fruitful direction in terms of research but perhaps one thing that an older mathematician can do is use his horse sense, his intuition of what is a good area, and then it’s really up to the student as to how far he can go in that direction.’ In the end Coates decided that Wiles should study an area of mathematics known as elliptic curves. This decision would eventually prove to be a turning point in Wiles’s career and give him the techniques he would require for a new approach to tackling Fermat’s Last Theorem.

  The name ‘elliptic curves’ is somewhat misleading for they are neither ellipses nor even curved in the normal sense of the word. Rather they are any equations which have the form

        y² = x³ + ax² + bx + c

where a, b and c are any whole numbers.

  They got their name because in the past they were used to measure the perimeters of ellipses and the lengths of planetary orbits, but for clarity I will simply refer to them as elliptic equations rather than elliptic curves.

  The challenge with elliptic equations, as with Fermat’s Last Theorem, is to figure out if they have whole number solutions, and, if so, how many. For example, the elliptic equation

        x³ - 2 = y²

  has only one set of whole number solutions, namely

        x = 3 and y = 5   (that is, 3³ - 2 = 5²)

  Proving that this elliptic equation has only one set of whole number solutions is an immensely difficult task, and in fact it was Pierre de Fermat who discovered the proof. You might remember that in Chapter 2 it was Fermat who proved that 26 is the only number in the universe sandwiched between a square and a cube number. This is equivalent to showing that the above elliptic equation has only one solution, i.e. 5² and 3³ are the only square and cube that differ by 2, and therefore 26 is the only number that can be sandwiched between two such numbers.
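  A computer search can offer reassurance, though never a proof. The sketch below hunts for whole number solutions of x³ - 2 = y² with x up to 100,000, a limit chosen purely for illustration:

```python
# A finite search for whole number solutions of x^3 - 2 = y^2. It is no
# substitute for Fermat's proof, but it shows nothing else turns up among
# the first hundred thousand values of x.

from math import isqrt

solutions = []
for x in range(2, 100_001):        # start at 2 so that x^3 - 2 is non-negative
    target = x**3 - 2
    y = isqrt(target)              # exact integer square root
    if y * y == target:
        solutions.append((x, y))

print(solutions)   # [(3, 5)]: 3^3 - 2 = 5^2
```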

  What makes elliptic equations particularly fascinating is that they occupy a curious niche between other simpler equations which are almost trivial and other more complicated equations which are impossible to solve. By simply changing the values of a, b and c in the general elliptic equation mathematicians can generate an infinite variety of equations, each one with its own characteristics, but all of them just within the realm of solubility.

  Elliptic equations were originally studied by the ancient Greek mathematicians, including Diophantus who devoted large parts of his Arithmetica to exploring their properties. Probably inspired by Diophantus, Fermat also took up the challenge of elliptic equations, and, because they had been studied by his hero, Wiles was happy to explore them further. Even after two thousand years elliptic equations still offered formidable problems for students such as Wiles: ‘They are very far from being completely understood. There are many apparently simple questions I could pose on elliptic equations that are still unresolved. Even questions that Fermat himself considered are still unresolved. In some way all the mathematics that I’ve done can trace its ancestry to Fermat, if not Fermat’s Last Theorem.’

  In the equations which Wiles studied as a graduate student, determining the exact number of solutions was so difficult that the only way to make any progress was to simplify the problem. For example, the following elliptic equation is almost impossible to tackle directly:

        x³ - x² = y² + y

  The challenge is to figure out how many whole number solutions there are to the equation. One fairly trivial solution is x = 0 and y = 0:

        0³ - 0² = 0² + 0

  A slightly more interesting solution is x = 1 and y = 0:

        1³ - 1² = 0² + 0

  There may be other solutions but, with an infinite quantity of whole numbers to investigate, giving a complete list of solutions to this particular equation is an impossible task. A simpler task is to look for solutions within a finite number space, so-called clock arithmetic.
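  Before turning to clock arithmetic, it is worth seeing how little a brute-force search achieves here. The sketch below tries every pair of whole numbers between -50 and 50 in the equation above; such a search can supply evidence, but it can never deliver the complete list:

```python
# A brute-force search for whole number solutions of x^3 - x^2 = y^2 + y
# inside a small, arbitrarily chosen window.

window = range(-50, 51)
solutions = [(x, y) for x in window for y in window
             if x**3 - x**2 == y**2 + y]
print(solutions)
# Within this window: (0, -1), (0, 0), (1, -1), (1, 0). A wider window
# could, in principle, always contain more.
```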

  Earlier we saw how numbers can be thought of as marks along the number line which extends to infinity, as shown in Figure 13. To make the number space finite, clock arithmetic involves truncating the line and looping it back on itself to form a number ring as opposed to a number line. Figure 14 shows a 5-clock, where the number line has been truncated at 5 and looped back to 0. The number 5 vanishes and becomes equivalent to 0, and therefore the only numbers in 5-clock arithmetic are 0, 1, 2, 3, 4.

  Figure 13. Conventional arithmetic can be thought of as movements up and down the number line.

  In normal arithmetic we can think of addition as moving along the line a certain number of spaces. For example, 4 + 2 = 6 is the same as saying: begin at 4, and move along the number line 2 spaces, and arrive at 6.

  However, in 5-clock arithmetic:

        4 + 2 = 1

  This is because if we start at 4 and move round 2 spaces then we arrive back at 1. Clock arithmetic might appear unfamiliar but in fact, as the name suggests, it is used every day when people discuss the time. Four hours after 11 o’clock (that is to say, 11 + 4) is generally not called 15 o’clock, but rather 3 o’clock. This is 12-clock arithmetic.

  As well as addition we can perform all the other common mathematical operations, such as multiplication. In 12-clock arithmetic 5 × 7 = 11. This multiplication can be thought of as follows: if you start at 0, then move along 5 lots of 7 spaces, you will eventually arrive at 11. Although this is one way of thinking about multiplication in clock arithmetic, there are short cuts which speed up calculations. For example, to calculate 5 × 7 in 12-clock arithmetic, we can begin by just working out the normal result which is 35. We then divide 35 by 12 and work out the remainder, which is the answer to the original question. So 12 goes into 35 only twice, with a remainder of 11, and sure enough 5 × 7 in 12-clock arithmetic is 11. This is equivalent to imagining going around the clock twice and still having 11 spaces to travel.
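  In a programming language such as Python, clock arithmetic amounts to nothing more than taking a remainder, and the examples above can be reproduced in a few lines:

```python
# Clock arithmetic is ordinary arithmetic followed by taking the remainder,
# which is exactly what Python's % operator does.

print((4 + 2) % 5)     # 1  : addition on the 5-clock
print((11 + 4) % 12)   # 3  : four hours after 11 o'clock
print((5 * 7) % 12)    # 11 : 35 is two full turns of the 12-clock plus 11
```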

  Because clock arithmetic deals only with a limited number space, it is relatively easy to work out all the possible solutions to an elliptic equation for a given clock arithmetic. For example, working in 5-clock arithmetic it is possible to list all the possible solutions to the elliptic equation

        x³ - x² = y² + y
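  Trying every pair takes a computer a fraction of a second; a minimal sketch:

```python
# Listing every solution of x^3 - x^2 = y^2 + y in 5-clock arithmetic by
# trying all twenty-five pairs (x, y) with x and y taken from 0, 1, 2, 3, 4.

clock = 5
solutions = [(x, y)
             for x in range(clock)
             for y in range(clock)
             if (x**3 - x**2) % clock == (y**2 + y) % clock]

print(len(solutions))   # 4
print(solutions)        # [(0, 0), (0, 4), (1, 0), (1, 4)]
```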

  Figure 14. In 5-clock arithmetic the number line is truncated at 5 and looped back on itself. The number 5 coincides with 0, and therefore is replaced by it.