Moreover, digital files controlled centrally are subject to curious dangers. In July 2009 users of Amazon’s Kindle reading device who had purchased a particular edition of 1984 by George Orwell discovered that it had vanished overnight. The disappearance was the consequence of a copyright wrangle, but the precise grounds hardly mattered. That it should have been Orwell’s book – in which a totalitarian government makes dissidents vanish by dropping them down a ‘memory hole’ – made the situation that much more bleakly ironic, but the basic truth was bad enough: a digital book could be removed from the Kindle device on instruction from a central location. Under what other circumstances might a text be pulled? If a book contained material that embarrassed the government, might it, too, disappear?

  The question ought to be ridiculous, but the remarkable flexibility of any number of companies when it comes to government requests became obvious in the course of the Wikileaks revelations in late 2010 and early 2011. Bank of America, Western Union, MasterCard, VISA Europe and PayPal responded to pressure from the Obama administration and cut off funding to Wikileaks on request. Amazon dropped the site from its hosting service. Wikileaks is a troublesome organization and one that undertakes actions of which many people do not approve. The way in which it was assailed, however, should give pause to anyone who believes that a digital archive will necessarily remain inviolate in the hands of a private company. (That said, I suppose we should be grateful the suggestions of some US legislators were ignored: at least one proposed that Julian Assange should be the target of a drone strike. He was at the time resident in central London.)

  So, what to do? How do you preserve a text stored in a mutable form? Obviously, you duplicate it and store the copies all over the place. Wikileaks, again, understood that the only way to be certain its trove of cables could not be sequestered or destroyed was to distribute it widely. The same logic applies to the global digital library; a real Gutenberg moment for the digital age is not the creation of a single database with a single gatekeeper, but the birth of any number of digital libraries accessible all over the world. For greater security of the contents, we'd also vary the format in which they were stored, making it harder for a virus to wreak havoc on them all. (You might argue that this brings its own problems of access, and it does, but it also doubles as insurance against the obsolescence of any given format. It's an interesting sidebar.)
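
  The mechanics of that insurance are simple enough to sketch. Here is a minimal illustration in Python – the directory names and the fragment of text are invented stand-ins for real libraries and a real book – which replicates a text across several locations in more than one format, keeping checksums so that later loss, corruption or quiet alteration shows up:

```python
import hashlib
import json
from pathlib import Path

def replicate(title: str, text: str, mirrors: list[Path]) -> dict[str, str]:
    """Store one text on several independent mirrors, in two different
    formats, recording a checksum for every copy so that tampering or
    rot can be detected later."""
    digests = {}
    for mirror in mirrors:
        mirror.mkdir(parents=True, exist_ok=True)
        # Copy one: plain text.
        txt = mirror / f"{title}.txt"
        txt.write_text(text, encoding="utf-8")
        # Copy two: a structurally different container (JSON here), so a
        # flaw in any single format cannot silently take out every copy.
        js = mirror / f"{title}.json"
        js.write_text(json.dumps({"title": title, "body": text}), encoding="utf-8")
        for path in (txt, js):
            digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

# Hypothetical directories standing in for far-flung libraries:
checksums = replicate("nineteen_eighty_four",
                      "It was a bright cold day in April...",
                      [Path("mirrors/eu"), Path("mirrors/us"), Path("mirrors/asia")])
```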

  The main debate comes back, in the end, to Marshall McLuhan: what is the message of a given technological choice? What does it mean to have a single digital library controlled from within the United States by a tech giant based in California, as opposed to having a vast number of libraries around the world, each of them containing the same media and each widely accessible? Which proposes democracy, and which leaves the door open to less amenable forms of governance? When we’re structuring our technologies – and the systems that support them and grow from them – we have to choose, over and over again, the path that emphasizes the society we want.

  It isn’t possible – at least, it’s not humanly possible, and not technologically possible yet – to map perfectly the consequences of an attempt to incentivize or disincentivize a given behaviour. Governments attempt it constantly using tax regulation, and the results are rarely if ever what was intended. Part of the understanding regarding the bank bailouts in the UK was that the banks would start lending to businesses again to encourage growth, but that isn’t happening either. Incentives that seem clear-cut can be more complex when you put them in context. For example, returning to Freakonomics for a moment: capital punishment is supposed to be a deterrent to serious crime. However, according to Levitt and Dubner, the annual execution rate among death row detainees in New York – and let’s not forget that a percentage of these will be innocent of the crime of which they are accused, so there’s a measurable chance that if you commit a crime, someone else will die for it – is 2 per cent. That compares favourably with the 7 per cent annual chance of dying just because a person is a member of the Black Gangster Disciple Nation crack gang. (And that’s before you factor in the point I made earlier, that a system of social controls relies on positives as well as negatives, and many of those who commit serious crimes may well have crossed that Rubicon already, feeling that society has no path for them.)

  Getting incentives flowing in the right direction is hard. Levitt and Dubner also cite a study showing that mechanics passed cars that should have failed a clean-air standard, because lenient mechanics got repeat business: great for the car trade, good for drivers whose need or desire for their car outweighs their understanding of pollution and its consequences, but bad for the economy, because the eventual costs of that pollution are probably higher than the cost of the repairs, and bad for public health (which is, obviously, also an economic factor).

  That said, the other thing you can learn from reading Freakonomics is that looking hard at the information in an intelligent way will help you track down situations where the incentives are all wrong. And a digital-era attitude to how systems work, one that says that no model is perfect first time, but that a model can be made better and better by refinement, might be able to make such systems work. It would probably require that we as human beings learn to function intelligently as a group rather than in our own individual short-term favour, something else that becomes easier as digital technology allows you to appreciate your place in a larger tapestry, and why it's better for you to accept a short-term loss in quest of a longer-term gain.

  Fundamentally, in reaching for a better world, we have to code for it: we have to create the technologies that will foster the culture and institutions that will produce it, and legislate for the climate that will allow it. It is not enough to cross our fingers and hope, now that we no longer have to.

  Choice is a new experience for us in the context of social change. We have until now been part of a process of change prompted by climatic shifts and migrations, famines, warlords and technological advances: not an inexorable progress towards a ‘higher form’ of social life, but a sort of ragged wandering. Furthermore, in case it isn’t obvious, the traits that are evolutionarily successful are not always those that we would desire for our societies. In biological evolution, a shining example of the cruelty of the selfish gene is the chicken. All but flightless, also tasty and nutritious, the chicken is the ideal domesticated bird. In consequence, the reported global chicken population in 2003 was 24 billion, making it the most successful avian in biological history. Any given chicken, of course, can expect to be eaten at any time, and a vast number of them live in appalling conditions in factory farms. The DNA doesn’t care as long as they breed.

  Similarly, if there is a process akin to evolution that applies to societies, it does not necessarily select for mass happiness. If the population’s unhappiness makes them more likely to survive hard conditions or destroy a competitor, that will do fine. Nor, indeed, is such a process a guarantee of our own society’s survivability. We could be a dead-end. Nonetheless, accounts of history have tended – sometimes unconsciously, sometimes with reference to a teleological theory like Marx’s historical materialism in which all roads eventually lead to perfect socialism – to suggest a kind of upward progress, and that notion is lodged in the popular consciousness. Francis Fukuyama proposed the end of history in 1992, suggesting that henceforth all change would be technological: the advent of liberal democracy meant the end of sociopolitical evolution. But it’s not clear why that should be the case. The neatness of teleological ideas of history and the future makes me suspect them; and then, too, they’re so self-congratulatory: we’re better in every way than those who came before us.

  It seems more likely to me that human social change to this point has been a drift, a series of actions and reactions, mostly without any sense of what the outcome of a given choice would be. Our grasp of what's going on is still fragmentary, but we can increasingly spot correlations and trends, and, importantly, we can spot them while they're happening rather than long after. Not all of them are immensely sophisticated; some are almost embarrassingly biological. For example, a recent study by researchers at Cornell University analysed half a billion postings from 2.5 million Twitter users, and established a common emotional shape to their days, varying by day of the week. In other words, there appears to be a predictable pattern of positivity at certain times of day, negativity at others, which extends across the eighty-four countries represented in the data. It's simple enough, but consider for a moment what it may mean to someone who is trying to sell you something. A study of different kinds of people will reveal whether they buy when they're happy because they feel confident or when they're feeling tired and cranky because it cheers them up. Advertising that reaches the right person at the right moment will be more effective, especially in a connected world where purchase can happen at almost the moment of desire. Cross-reference that with a closer understanding of the kind of person someone is, gleaned from their social media information, and a database of like and dislike matches, perhaps a credit score, and you have a slightly alarming persuasion engine which knows when and where we are most vulnerable to an 'impulse purchase', or can guess how to get a more stolid soul to choose 'yes' in the longer term.
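
  The computation behind such a study is not mysterious. A toy sketch – invented timestamps and sentiment scores standing in for the half-billion real postings, and nothing like the Cornell team's actual pipeline – simply buckets scored posts by day of the week and hour:

```python
from collections import defaultdict
from datetime import datetime

# Each post is a (timestamp, sentiment score) pair; the scores would come
# from a sentiment lexicon. These four are illustrative stand-ins.
posts = [
    (datetime(2011, 9, 5, 8, 14), +0.6),
    (datetime(2011, 9, 5, 15, 2), -0.2),
    (datetime(2011, 9, 6, 8, 40), +0.5),
    (datetime(2011, 9, 6, 23, 11), -0.4),
]

totals = defaultdict(lambda: [0.0, 0])
for when, score in posts:
    key = (when.weekday(), when.hour)   # bucket by day-of-week and hour
    totals[key][0] += score
    totals[key][1] += 1

mood_curve = {key: s / n for key, (s, n) in totals.items()}
for (day, hour), mean in sorted(mood_curve.items()):
    print(f"day {day}, {hour:02d}:00 -> mean sentiment {mean:+.2f}")
```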

  Leaving aside the obnoxiousness of a system that can get you to fill your house with consumer junk you might have decided you didn’t need if left alone, we’re presently painfully well acquainted with the macro-effects of irrational and over-aspirational decisions taken under the tutelage of corporate institutions such as credit card companies and banks who were perhaps looking to their own bottom line rather more than to the best possible advice for the individual. The wrong kind of successful persuasion could leave us with even more ghastly financial wreckage.

  At this moment, there is a kind of arms race taking place as we learn more about ourselves and what influences us – and we, as individuals, are definitely not in the lead. The world's big companies, and most particularly those that control or have access to Internet data, presently know more about how we buy than we do, because, as in Google's case, or Facebook's or Amazon's, their day-to-day function entails the gathering of huge amounts of raw data which they have the know-how to collate. That's to say that they know more about what influences us to buy, how we make the final 'yes' decision, and conversely what's likely to prevent us from doing so. They know that people do, ridiculously, buy more items priced at 99p than they would items priced at £1 (in behavioural economics, the phenomenon is called 'left-digit bias'). They know that we take undue notice of anchor prices, and indeed of random numbers that merely look like prices, in assessing whether something offers good value. They formulate ways of saving such as Amazon Prime, which for a single annual payment gives you 'free' one-day delivery – a bargain that results in people buying more and more from Amazon to offset the fee, so that what looks like a cost to the company ultimately works out in Amazon's favour.
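
  The Prime arithmetic is easy to run for yourself. With invented figures – the real fee and delivery charges vary by year and country – the break-even point falls out in two lines, and the psychology follows from it:

```python
# Hypothetical figures for illustration only.
annual_fee = 79.00        # one-off subscription payment
saving_per_order = 4.00   # delivery charge avoided on each order

break_even_orders = annual_fee / saving_per_order
print(f"Orders needed to 'earn back' the fee: {break_even_orders:.0f}")
# -> 20. Having paid up front, the subscriber now has a standing reason
# to route every purchase through the same shop until, and past, that
# point: the fee converts a per-order cost into a sunk cost.
```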

  Like casinos, large corporate entities have studied the numbers and the ways in which people respond to them. These are not con tricks – they’re not even necessarily against our direct interests, although sometimes they can be – but they are hacks for the human mind, ways of manipulating us into particular decisions we otherwise might not make. They are also, in a way, deliberate underminings of the core principle of the free market, which derives its legitimacy from the idea that informed self-interest on aggregate sets appropriate prices for items. The key word is ‘informed’; the point of behavioural economics – or, rather, of its somewhat buccaneering corporate applications – is to skew our perception of the purchase to the advantage of the company. The overall consequence of that is to tilt the construction of our society away from what it should be if we were making the rational decisions classical economics imagines we would, and towards something else.

  Governments, too, are looking to the technologies of 'nudging' to push us towards certain behaviours and away from others, trying, perhaps, to push back the ghost of the classical Greek orator Isocrates: 'Democracy destroys itself because it abuses its right to freedom and equality. Because it teaches its citizens to consider audacity as a right, lawlessness as a freedom, abrasive speech as equality, and anarchy as progress.' But nudging really means subconsciously influencing the voting population towards things they might otherwise not choose, rather than arguing the case directly. There's a very fine line here: the feedback system of road signs that tell you your speed is, obviously, a hint to slow down, but it is overt and leaves you with a clear choice. The classic example of 'nudging' is making a decision 'opt out' rather than 'opt in' – organ donation, for instance, or indeed the original Google Book Settlement. In a governmental context it becomes an Orwellian style of dictatorship: by presenting heavily skewed choices – options that demand irritating effort set against others that do not – and by avoiding genuinely difficult, balanced decisions between two possible courses of action, it effectively makes the population practise docility until docility becomes nicely habitual. Like cross-subsidization, 'libertarian paternalism' is the quiet death of genuine choice, the creation of a society in which a class of supposed experts rules over a common herd. It also throws away any hope of using the power of the crowd to make good decisions, because it supplies the crowd with bad basic information.
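
  The mechanism of an opt-out default can be captured in a few lines. A hypothetical sketch, not any real registry's code: an explicit answer always wins, and the scheme's default quietly decides for everyone who never touches the form – which is to say, for most people:

```python
def donor_status(form_response, scheme_default):
    """An explicit choice is honoured either way; the default
    decides for every non-responder."""
    if form_response is not None:
        return form_response == "yes"
    return scheme_default

# Same population, same nominal freedom to choose:
print(donor_status(None, scheme_default=False))  # opt-in:  non-responder stays out
print(donor_status(None, scheme_default=True))   # opt-out: non-responder is counted in
```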

  But information about our behaviour is also, increasingly, available to us. It is possible to become what some psychologists call 'test-conscious': aware of the tricks, and therefore, while not immune to them, at least capable of understanding that they are being played and of looking for ways to compensate. Once again, we're discussing a kind of feedback, and once again it comes from our new ability to watch ourselves in the digital mirror at various different levels – as individuals or as large groups. We can learn to use our knowledge of what will influence us to nudge ourselves towards good decisions, thus creating a virtuous cycle. We need, ultimately, to favour the creation of systems around us that foster informed, intelligent decisions – not ones that increase the profit margins of companies at the expense of our own good outcomes, or that bow automatically to the will of a government that may or may not know what it's doing. In the shorter term, we need to learn to make good choices even though we are irrational, and to educate and train ourselves in the habit of decision.

  By way of example: Dan Ariely describes the pricing structure of US subscriptions to the Economist magazine – $59 for an Internet-only subscription, $125 for print-only, and $125 for a combined sub – and breaks it down as follows: 'most people don't know what they want unless they see it in context … the Economist's marketers offered us a no-brainer: relative to the print option, the print-and-Internet option looks clearly superior.' But the only reason the combination option looks so strong is the presence of the print-only option. In fact, that comparison is the sole reason for the print-only option's existence: it's there to make you think the combination is a bargain. Consider it again without the print option: $59 for an Internet subscription, or $125 for a print-and-Internet combination. Even having gone through the whole process openly, do you suddenly find the choice slightly different? I have read and re-read that section of Ariely's book. I have used the example repeatedly with friends and in professional discussions of ebook pricing. And yet for me, as I type this, the equation has still changed: abruptly, the combination option seems more expensive than it did at the beginning of this paragraph.
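
  You can even make the trick mechanical. Using only the three prices above, a few lines of Python confirm what the eye half-sees: the print-only option is strictly dominated – no rational subscriber would ever pick it – and exists purely to flatter the bundle:

```python
# Ariely's three offers, as (price, gets_print, gets_web):
offers = {
    "web only":    (59,  False, True),
    "print only":  (125, True,  False),
    "print + web": (125, True,  True),
}

def dominated(a, b):
    """True if offer b gives at least as much as a for no more money,
    and is strictly better on at least one count."""
    (pa, pra, wa), (pb, prb, wb) = a, b
    return (pb <= pa and prb >= pra and wb >= wa
            and (pb < pa or prb > pra or wb > wa))

for name, offer in offers.items():
    winners = [other for other, o in offers.items()
               if other != name and dominated(offer, o)]
    if winners:
        print(f"'{name}' is dominated by {winners}")
# -> 'print only' is dominated by ['print + web']: it is there purely
#    as a point of comparison.
```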

  Perhaps the most important thing is not to get good at making decisions, but to remember to want the opportunity to do so. On 13 July 2011 New Scientist magazine carried an item on Prodcast, a software system created by Microsoft to help you decide when to buy technological goods. The logic is unassailable: new technological gizmos decrease in price over time and eventually are replaced. There is therefore a sort of perfect window where they're as cheap as they're going to get without being outmoded (although 'outmoded' is a harsh description for a fully functional item that is fractionally slower than the newer version or takes slightly lower resolution pictures). Prodcast tells you when to strike, while other expert systems advise on what and where to buy.
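
  The underlying logic is easy to caricature in code. A toy version – the price projections and the cost of waiting are invented, and this is certainly not Microsoft's actual model – buys at the point where another week of patience saves less than it costs:

```python
# Projected weekly prices for a gadget (hypothetical), and a notional
# cost per week of living without it.
projected_prices = [500, 470, 450, 438, 430, 426, 424, 423]
cost_of_waiting_per_week = 5.0

def week_to_buy(prices, waiting_cost):
    for week in range(len(prices) - 1):
        expected_saving = prices[week] - prices[week + 1]
        if expected_saving < waiting_cost:
            return week          # waiting longer no longer pays
    return len(prices) - 1

print(f"Buy in week {week_to_buy(projected_prices, cost_of_waiting_per_week)}")
# -> week 4: after that point the price curve flattens out.
```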

  Taken together, these systems seem to imply that human intervention in the decision is rather unnecessary. The same idea resonates in Eric Schmidt’s assessment of Google’s ability to predict what a user is thinking and in Amazon’s recommendations: the concept is that the system knows what you will think before you think it, but of course it’s somewhat self-fulfilling, like the old trick of telling someone not to think about purple elephants. It goes beyond advertising, which to me has a sense of being broadcast; it’s an offer of a specific product driven by an analysis of you. The next step in the seamless interaction of the system and your environment is a kind of checklist: Are you sure you don’t want object x? Well, no, I suppose I’m not sure I don’t want it … Maybe I do … And indeed, my online supermarket does exactly that: before I can check out, I have to run a gauntlet of helpful suggestions of things I may have forgotten and things I haven’t tried but which the software is offering me on the basis of an analysis of purchasing patterns across the board.
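
  My supermarket's gauntlet is no harder to sketch. A hypothetical version – tiny invented baskets in place of a real purchasing database – needs only two ingredients: what I habitually buy, and what other people buy alongside whatever is already in my trolley:

```python
from collections import Counter

purchase_history = [                     # invented past baskets
    {"milk", "bread", "coffee"},
    {"milk", "bread", "butter"},
    {"milk", "coffee", "eggs"},
]
co_purchases = {                         # invented cross-customer pairings
    "bread": {"butter", "jam"},
    "coffee": {"biscuits"},
}

def checkout_suggestions(basket, history, co):
    counts = Counter(item for past in history for item in past)
    habitual = {item for item, n in counts.items() if n >= len(history) * 0.6}
    forgotten = habitual - basket        # 'Have you forgotten...?'
    untried = set().union(*(co.get(item, set()) for item in basket)) - basket
    return forgotten, untried

forgotten, untried = checkout_suggestions({"bread", "coffee"},
                                          purchase_history, co_purchases)
print("You may have forgotten:", forgotten)  # {'milk'}
print("Why not try:", untried)               # {'butter', 'jam', 'biscuits'}
```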

  One step beyond that is the creation of a bank account to which the software has access. You could set up a robot that would check all three variables – what to buy, where, and when – against whether you had sufficient funds, and place an order if you did, and your shiny new product would arrive before you even realized that you wanted it. It would probably send you an email to let you know the item had been ordered, so that you didn't accidentally buy one in the meantime. Life without the stress of discovering your own amusements; new things just arriving for you to play with. If we follow this road, will we lose, instead of decision-making ability, the habit of volition? Long-term prison inmates eventually lose the habit of opening doors for themselves. Instead they stand and wait for a warden to unlock the door and tell them to step through. It starts to look a little like the nightmare scenario, but it's a consequence of choice. If we ever arrive at that place, we will have chosen it. It's not inherent in the technology – but it may be an aspect of our culture. The problem, of course, is that the choice is clouded and concealed. We don't see it for what it is – although we are perhaps beginning to.