Learning strategies have been used in some chess-playing computer programs. These programs actually get better as they play against human opponents or against other computers. Although they are equipped with a repertoire of rules and tactics, they also have a small random tendency built into their decision procedure. They record past decisions, and whenever they win a game they slightly increase the weighting given to the tactics that preceded the victory, so that next time they are a little bit more likely to choose those same tactics again.
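As a rough illustration of the scheme just described, here is a minimal sketch in Python. The class name, the tactic labels, the exploration rate, and the size of the reward are all invented for the example; they are not taken from any real chess program.

```python
import random

# A minimal sketch of the weight-adjustment learning described above.
# Tactic names, learning rate, and selection rule are illustrative only.

class LearningPlayer:
    def __init__(self, tactics, exploration=0.1, reward=0.05):
        # Start with equal weight for every tactic in the repertoire.
        self.weights = {t: 1.0 for t in tactics}
        self.exploration = exploration   # the "small random tendency"
        self.reward = reward             # how much a win boosts each tactic used
        self.used_this_game = []

    def choose_tactic(self):
        # Usually pick the highest-weighted tactic, but sometimes pick at random.
        if random.random() < self.exploration:
            tactic = random.choice(list(self.weights))
        else:
            tactic = max(self.weights, key=self.weights.get)
        self.used_this_game.append(tactic)   # record past decisions
        return tactic

    def game_over(self, won):
        # After a victory, slightly increase the weighting of the tactics that
        # preceded it, so they are a little more likely to be chosen next time.
        if won:
            for tactic in self.used_this_game:
                self.weights[tactic] += self.reward
        self.used_this_game = []
```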
One of the most interesting methods of predicting the future is simulation. If a general wishes to know whether a particular military plan will be better than alternatives, he has a problem in prediction. There are unknown quantities in the weather, in the morale of his own troops, and in the possible countermeasures of the enemy. One way of discovering whether it is a good plan is to try and see, but it is undesirable to use this test for all the tentative plans dreamed up, if only because the supply of young men prepared to die 'for their country' is exhaustible, and the supply of possible plans is very large. It is better to try the various plans out in dummy runs rather than in deadly earnest. This may take the form of full-scale exercises with 'Northland' fighting 'Southland' using blank ammunition, but even this is expensive in time and materials. Less wastefully, war games may be played, with tin soldiers and little toy tanks being shuffled around a large map.
Recently, computers have taken over large parts of the simulation function, not only in military strategy, but in all fields where prediction of the future is necessary, fields like economics, ecology, sociology, and many others. The technique works like this. A model of some aspect of the world is set up in the computer. This does not mean that if you unscrewed the lid you would see a little miniature dummy inside with the same shape as the object simulated. In the chess-playing computer there is no 'mental picture' inside the memory banks recognizable as a chess board with knights and pawns sitting on it. The chess board and its current position would be represented by lists of electronically coded numbers. To us a map is a miniature scale model of a part of the world, compressed into two dimensions. In a computer, a map might alternatively be represented as a list of towns and other spots, each with two numbers: its latitude and longitude. But it does not matter how the computer actually holds its model of the world in its head, provided that it holds it in a form in which it can operate on it, manipulate it, do experiments with it, and report back to the human operators in terms which they can understand. Through the technique of simulation, model battles can be won or lost, simulated airliners fly or crash, economic policies lead to prosperity or to ruin. In each case the whole process goes on inside the computer in a tiny fraction of the time it would take in real life. Of course there are good models of the world and bad ones, and even the good ones are only approximations. No amount of simulation can predict exactly what will happen in reality, but a good simulation is enormously preferable to blind trial and error. Simulation could be called vicarious trial and error, a term unfortunately pre-empted long ago by rat psychologists.
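To make the point about representation concrete, here is a minimal sketch in Python of a 'map' held as nothing more than a list of named places, each reduced to two numbers. The place names, the coordinates, and the use of the haversine distance formula are illustrative assumptions, not part of any particular simulation.

```python
import math

# A computer's "map": not a picture, just named places, each reduced to
# two numbers (latitude and longitude, in degrees). Values are illustrative.
places = {
    "Oxford": (51.75, -1.26),
    "Cambridge": (52.21, 0.12),
    "Edinburgh": (55.95, -3.19),
}

def distance_km(a, b):
    # Great-circle distance via the haversine formula, so the model can be
    # operated on and experimented with, not merely stored.
    (lat1, lon1), (lat2, lon2) = places[a], places[b]
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

# Experiment on the model, not on the world.
print(distance_km("Oxford", "Cambridge"))
```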
If simulation is such a good idea, we might expect that survival machines would have discovered it first. After all, they invented many of the other techniques of human engineering long before we came on the scene: the focusing lens and the parabolic reflector, frequency analysis of sound waves, servo-control, sonar, buffer storage of incoming information, and countless others with long names, whose details don't matter. What about simulation? Well, when you yourself have a difficult decision to make involving unknown quantities in the future, you do go in for a form of simulation. You imagine what would happen if you did each of the alternatives open to you. You set up a model in your head, not of everything in the world, but of the restricted set of entities which you think may be relevant. You may see them vividly in your mind's eye, or you may see and manipulate stylized abstractions of them. In either case it is unlikely that somewhere laid out in your brain is an actual spatial model of the events you are imagining. But, just as in the computer, the details of how your brain represents its model of the world are less important than the fact that it is able to use it to predict possible events. Survival machines that can simulate the future are one jump ahead of survival machines who can only learn on the basis of overt trial and error. The trouble with overt trial is that it takes time and energy. The trouble with overt error is that it is often fatal. Simulation is both safer and faster.
The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology. There is no reason to suppose that electronic computers are conscious when they simulate, although we have to admit that in the future they may become so. Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself. Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be 'self-awareness', but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress: if there is a model of the model, why not a model of the model of the model...?
Whatever the philosophical problems raised by consciousness, for the purpose of this story it can be thought of as the culmination of an evolutionary trend towards the emancipation of survival machines as executive decision-takers from their ultimate masters, the genes. Not only are brains in charge of the day-to-day running of survival-machine affairs, they have also acquired the ability to predict the future and act accordingly. They even have the power to rebel against the dictates of the genes, for instance in refusing to have as many children as they are able to. But in this respect man is a very special case, as we shall see.
What has all this to do with altruism and selfishness? I am trying to build up the idea that animal behaviour, altruistic or selfish, is under the control of genes in only an indirect, but still very powerful, sense. By dictating the way survival machines and their nervous systems are built, genes exert ultimate power over behaviour. But the moment-to-moment decisions about what to do next are taken by the nervous system. Genes are the primary policy-makers; brains are the executives. But as brains became more highly developed, they took over more and more of the actual policy decisions, using tricks like learning and simulation in doing so. The logical conclusion to this trend, not yet reached in any species, would be for the genes to give the survival machine a single overall policy instruction: do whatever you think best to keep us alive.
Analogies with computers and with human decision-taking are all very well. But now we must come down to earth and remember that evolution in fact occurs step-by-step, through the differential survival of genes in the gene pool. Therefore, in order for a behaviour pattern, altruistic or selfish, to evolve, it is necessary that a gene 'for' that behaviour should survive in the gene pool more successfully than a rival gene or allele for some different behaviour. A gene for altruistic behaviour means any gene that influences the development of nervous systems in such a way as to make them likely to behave altruistically. Is there any experimental evidence for the genetic inheritance of altruistic behaviour? No, but that is hardly surprising, since little work has been done on the genetics of any behaviour. Instead, let me tell you about one study of a behaviour pattern which does not happen to be obviously altruistic, but which is complex enough to be interesting. It serves as a model for how altruistic behaviour might be inherited.
Honey bees suffer from an infectious disease called foul brood. This attacks the grubs in their cells. Of the domestic breeds used by beekeepers, some are more at risk from foul brood than others, and it turns out that the difference between strains is, at least in some cases, a behavioural one. There are so-called hygienic strains which quickly stamp out epidemics by locating infected grubs, pulling them from their cells and throwing them out of the hive. The susceptible strains are susceptible because they do not practise this hygienic infanticide. The behaviour actually involved in hygiene is quite complicated. The workers have to locate the cell of each diseased grub, remove the wax cap from the cell, pull out the larva, drag it through the door of the hive, and throw it on the rubbish tip.
Doing genetic experiments with bees is quite a complicated business for various reasons. Worker bees themselves do not ordinarily reproduce, and so you have to cross a queen of one strain with a drone (= male) of the other, and then look at the behaviour of the daughter workers. This is what W. C. Rothenbuhler did. He found that all first-generation hybrid daughter hives were non-hygienic: the behaviour of their hygienic parent seemed to have been lost, although as things turned out the hygienic genes were still there but were recessive, like human genes for blue eyes. When Rothenbuhler 'back-crossed' first-generation hybrids with a pure hygienic strain (again of course using queens and drones), he obtained a most beautiful result. The daughter hives fell into three groups. One group showed perfect hygienic behaviour, a second showed no hygienic behaviour at all, and the third went half way. This last group uncapped the wax cells of diseased grubs, but they did not follow through and throw out the larvae. Rothenbuhler surmised that there might be two separate genes, one gene for uncapping, and one gene for throwing-out. Normal hygienic strains possess both genes; susceptible strains possess the alleles (rivals) of both genes instead. The hybrids who only went halfway presumably possessed the uncapping gene (in double dose) but not the throwing-out gene. Rothenbuhler guessed that his experimental group of apparently totally non-hygienic bees might conceal a subgroup possessing the throwing-out gene, but unable to show it because they lacked the uncapping gene. He confirmed this most elegantly by removing caps himself. Sure enough, half of the apparently non-hygienic bees thereupon showed perfectly normal throwing-out behaviour.
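For readers who want to see how the three groups, and the hidden fourth subgroup, fall out of the arithmetic, here is a toy Mendelian sketch in Python. It deliberately ignores the real complications of bee genetics (haploid drones, whole-hive phenotypes), the allele symbols are invented for the example, and it should be read as an illustration of the logic rather than as a model of the actual experiment.

```python
import random
from collections import Counter

# Toy model: 'u' = recessive uncapping allele, 'r' = recessive throwing-out
# allele; capital letters are the dominant non-hygienic alleles.

def backcross_offspring():
    # F1 hybrid (Uu Rr) crossed back to the pure hygienic strain (uu rr):
    # one random allele from the hybrid, plus 'u' and 'r' from the hygienic parent.
    uncap = random.choice("Uu") + "u"
    throw = random.choice("Rr") + "r"
    return uncap, throw

def phenotype(uncap, throw, caps_removed_by_hand=False):
    can_uncap = uncap == "uu"   # recessive: needs a double dose
    can_throw = throw == "rr"
    if can_uncap and can_throw:
        return "fully hygienic"
    if can_uncap:
        return "uncaps only"                   # the halfway group
    if caps_removed_by_hand and can_throw:
        return "throws out once caps removed"  # the hidden subgroup
    return "apparently non-hygienic"

# Roughly 1/4 hygienic, 1/4 uncaps only, 1/2 apparently non-hygienic.
print(Counter(phenotype(*backcross_offspring()) for _ in range(10000)))

# With the experimenter removing the caps by hand, half of the apparently
# non-hygienic group reveals normal throwing-out behaviour.
print(Counter(phenotype(*backcross_offspring(), caps_removed_by_hand=True)
              for _ in range(10000)))
```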
This story illustrates a number of important points which came up in the previous chapter. It shows that it can be perfectly proper to speak of a 'gene for behaviour so-and-so' even if we haven't the faintest idea of the chemical chain of embryonic causes leading from gene to behaviour. The chain of causes could even turn out to involve learning. For example, it could be that the uncapping gene exerts its effect by giving bees a taste for infected wax. This means they will find the eating of the wax caps covering disease-victims rewarding, and will therefore tend to repeat it. Even if this is how the gene works, it is still truly a gene for uncapping provided that, other things being equal, bees possessing the gene end up by uncapping, and bees not possessing the gene do not uncap.
Secondly it illustrates the fact that genes 'cooperate' in their effects on the behaviour of the communal survival machine. The throwing-out gene is useless unless it is accompanied by the uncapping gene and vice versa. Yet the genetic experiments show equally clearly that the two genes are in principle quite separable in their journey through the generations. As far as their useful work is concerned you can think of them as a single cooperating unit, but as replicating genes they are two free and independent agents.
For purposes of argument it will be necessary to speculate about genes 'for' doing all sorts of improbable things. If I speak, for example, of a hypothetical gene 'for saving companions from drowning', and you find such a concept incredible, remember the story of the hygienic bees. Recall that we are not talking about the gene as the sole antecedent cause of all the complex muscular contractions, sensory integrations, and even conscious decisions, that are involved in saving somebody from drowning. We are saying nothing about the question of whether learning, experience, or environmental influences enter into the development of the behaviour. All you have to concede is that it is possible for a single gene, other things being equal and lots of other essential genes and environmental factors being present, to make a body more likely to save somebody from drowning than its allele would. The difference between the two genes may turn out at bottom to be a slight difference in some simple quantitative variable. The details of the embryonic developmental process, interesting as they may be, are irrelevant to evolutionary considerations. Konrad Lorenz has put this point well.
The genes are master programmers, and they are programming for their lives. They are judged according to the success of their programs in coping with all the hazards that life throws at their survival machines, and the judge is the ruthless judge of the court of survival. We shall come later to ways in which gene survival can be fostered by what appears to be altruistic behaviour. But the obvious first priorities of a survival machine, and of the brain that takes the decisions for it, are individual survival and reproduction. All the genes in the 'colony' would agree about these priorities. Animals therefore go to elaborate lengths to find and catch food; to avoid being caught and eaten themselves; to avoid disease and accident; to protect themselves from unfavourable climatic conditions; to find members of the opposite sex and persuade them to mate; and to confer on their children advantages similar to those they enjoy themselves. I shall not give examples; if you want one, just look carefully at the next wild animal that you see. But I do want to mention one particular kind of behaviour because we shall need to refer to it again when we come to speak of altruism and selfishness. This is the behaviour that can be broadly labelled communication. A survival machine may be said to have communicated with another one when it influences its behaviour or the state of its nervous system. This is not a definition I should like to have to defend for very long, but it is good enough for present purposes. By influence I mean direct causal influence. Examples of communication are numerous: song in birds, frogs, and crickets; tail-wagging and hackle-raising in dogs; 'grinning' in chimpanzees; human gestures and language. A great number of survival-machine actions promote their genes' welfare indirectly by influencing the behaviour of other survival machines. Animals go to great lengths to make this communication effective. The songs of birds enchant and mystify successive generations of men. I have already referred to the even more elaborate and mysterious song of the humpback whale, with its prodigious range, its frequencies spanning the whole of human hearing from subsonic rumblings to ultrasonic squeaks. Mole-crickets amplify their song to stentorian loudness by singing down in a burrow which they carefully dig in the shape of a double exponential horn, or megaphone. Bees dance in the dark to give other bees accurate information about the direction and distance of food, a feat of communication rivalled only by human language itself.
The traditional story of ethologists is that communication signals evolve for the mutual benefit of both sender and recipient. For instance, baby chicks influence their mother's behaviour by giving high piercing cheeps when they are lost or cold. This usually has the immediate effect of summoning the mother, who leads the chick back to the main clutch. This behaviour could be said to have evolved for mutual benefit, in the sense that natural selection has favoured babies that cheep when they are lost, and also mothers that respond appropriately to the cheeping.
If we wish to (it is not really necessary), we can regard signals such as the cheep call as having a meaning, or as carrying information: in this case 'I am lost.' The alarm call given by small birds, which I mentioned in Chapter 1, could be said to convey the information 'There is a hawk.' Animals who receive this information and act on it are benefited. Therefore the information can be said to be true. But do animals ever communicate false information; do they ever tell lies?
The notion of an animal telling a lie is open to misunderstanding, so I must try to forestall this. I remember attending a lecture given by Beatrice and Allen Gardner about their famous 'talking' chimpanzee Washoe (she uses American Sign Language, and her achievement is of great potential interest to students of language). There were some philosophers in the audience, and in the discussion after the lecture they were much exercised by the question of whether Washoe could tell a lie. I suspected that the Gardners thought there were more interesting things to talk about, and I agreed with them. In this book I am using words like 'deceive' and 'lie' in a much more straightforward sense than those philosophers. They were interested in conscious intention to deceive. I am talking simply about having an effect functionally equivalent to deception. If a bird used the 'There is a hawk' signal when there was no hawk, thereby frightening his colleagues away, leaving him to eat all their food, we might say he had told a lie. We would not mean he had deliberately intended consciously to deceive. All that is implied is that the liar gained food at the other birds' expense, and the reason the other birds flew away was that they reacted to the liar's cry in a way appropriate to the presence of a hawk.
Many edible insects, like the butterflies of the previous chapter, derive protection by mimicking the external appearance of other distasteful or stinging insects. We ourselves are often fooled into thinking that yellow and black striped hover-flies are wasps. Some bee-mimicking flies are even more perfect in their deception. Predators too tell lies. Angler fish wait patiently on the bottom of the sea, blending in with the background. The only conspicuous part is a wriggling worm-like piece of flesh on the end of a long 'fishing rod', projecting from the top of the head. When a small prey fish comes near, the angler will dance its worm-like bait in front of the little fish, and lure it down to the region of the angler's own concealed mouth. Suddenly it opens its jaws, and the little fish is sucked in and eaten. The angler is telling a lie, exploiting the little fish's tendency to approach wriggling worm-like objects. He is saying 'Here is a worm', and any little fish who 'believes' the lie is quickly eaten.