This is the happy valley, the high plateau of technological culture.

  We are culturally, and perhaps as a species, predisposed to give more attention to bad occurrences than good ones – possibly because, in a survival environment (from which none of us is many generations removed, and through which we all to some extent still move), being relaxed about serious threats results in death. A predisposition towards watchfulness is a survival trait. In other words, if you find yourself thinking that the nightmare I’ve drawn is infinitely more plausible than the happy valley, take a moment to consider whether that’s really the case. Both draw on trends and technologies that already exist; both would require significant shifts in the way we live to come true. It’s hard to balance a horror and a dream without making the latter look specious or diluting it to the point where it is no longer as positive an outcome as the nightmare is negative.

  Detractors of the digital technologies with which we live lament the practice of digital skim-reading, and worry that while it is in its own right a useful skill, it does not substitute for ‘deep reading’, the more focused, uninterrupted form of information intake and cognition which was common twenty years ago. Hypertext – text with connections to other texts and data built in, in the style of the World Wide Web – is apparently a lousy medium for focusing on what’s written in a given piece; some studies show decreased comprehension in readers of a document with links as opposed to those issued with a plain text version, because, among other things, the brain apparently has a maximum ‘cognitive load’ of a relatively small number of topics which can be held in working memory at a time. Hypertext, with its multiple pathways, simply throws too much at working memory, and comprehension and retention suffer. Since the reading brain and the habits of thought which go with it are central to our present human identity, the question of how this affects us is an important one: if our reading habits change – the written and read word being arguably a defining aspect of our cultural evolution and the formation of each of us as individuals – what change will be wrought on us and our world? On the other hand, if we resist that change, will we be unable to cope with the information-saturated environment we have made? Is it a question of losing who we are whatever we do?

  Meanwhile, the world we live in – despite being by some measures an extraordinary place – has some serious unsolved problems. Some of these, in specific sectors, are bound up with technology, but the majority and the worst are not – at least not directly. In 2008 we discovered that our financial markets had become so cluttered with bad loans that we’d inflated the system into a – historically familiar – giant bubble, which had burst. It then turned out that we couldn’t simply let the sin of hubris punish itself, because the same institutions which created this idiocy were deeply enmeshed in the day-to-day business of living. Banks had to be rescued, because their failure entailed the failure of industrial heavyweights on which millions of jobs depended. Those banks were not too big to fail, but too embedded. The fairy-dust economics of the 2000s – in which global debts rose from $84 trillion to $185 trillion (yes, really) – is turning to stone in the cold light of dawn, but by some strange miracle it’s still impossible to regulate the sector to preclude a recurrence of the 2008 crisis without instantly provoking exactly that. Social media and even the conventional press buzz with frustration, and the Occupy movement has emerged, an international phenomenon made possible in part by rapid communication and self-identification; but no solutions are obvious yet, and the reaction from many quarters to the Occupy camps has been negative to the point of being alarmingly oppressive.

  At the same time, many nations are seeing a decline in manufacturing, and while some thinkers herald this as the dawn of the Information Age and the Knowledge Economy, others are rather more cautious. Knowledge has always been the basis of industry, but by itself, it doesn’t actually make anything or put food on anyone’s table. As far as I can see – in the UK, at least – ‘post-industrial’ is shorthand for a finance-based economy like the one which recently imploded so excitingly when we accidentally established that it was made entirely of financial smoke and mirrors. Meanwhile, we face the curious spectacle of Warren Buffett telling the US President that the mega-rich in his country do not pay enough tax, and Google CEO Eric Schmidt agreeing that Google would happily pay more tax in the UK in order to operate here. On the flipside, charities in my home city say they are seeing a rise in homelessness, and some evidence seems to suggest that many of those made homeless are well-qualified people who cannot find enough work to live on.

  Overseas, Europe and the US are enmeshed in any number of small-to-medium violent conflicts, in most cases to protect our access to oil and rare earths needed to sustain our mode of living – a mode that is mostly mid-twentieth century, constructed around the automobile rather than the Internet. That petroleum lifestyle is killing the biosphere on which we depend (the only one to which we have access) while making us radically unpopular with large portions of the global population, who feel – not without some justification – that we export poverty, waste and violence and import money and resources. Some of the states in this relationship with us have begun to re-export violence in the form of terrorism, a bleakly ironic twist on conventional economics.

  At home, issues of race, religion, sexuality and gender remain poisonous, while our governments demand greater and greater rights of surveillance over our lives and seek, week by week, to curtail the historic freedoms of assembly and speech which have marked our culture’s development. Trial by jury, habeas corpus and the rules of evidence are constantly assailed, as is the independence of the judiciary. In the aftermath of the riots which took place in the UK in the summer of 2011, David Cameron vowed that he would crack down on ‘phoney human rights’, which seems to mean any rights that are not convenient. At the same time, and despite evidence that it would be both impractical and counterproductive, some MPs began to call for the government to be able to ‘pull the plug’ on the Internet and the cellphone network in times of civil unrest; a weird, desperate grasping for centralized power and control which seems alien to a modern government.

  Perhaps in consequence of this kind of disconnection, politicians are perceived as mendacious, governmental press releases as spin. The professional political class, in return, describes the electorate as apathetic, or unable to comprehend the issues. The standard response to a public outcry is not ‘we’ll change the policy’ but ‘clearly we’ve failed to get our plan across properly’. In the UK under the Blair government, two of the largest political demonstrations in modern British history took place within a few months of each other – one against a ban on fox-hunting and another against the Iraq War – and were played off against one another into a stasis which meant both could be ignored. More generally, serious issues often go untackled or are botched for fear of expending political capital on unpopular legislation in the face of tabloid scorn. Extremist political views are becoming more popular across Europe and in the US as mainstream political parties fail to speak substantively about what is going on, preferring instead to throw mud at one another.

  In other words, before we start to look at possible digital apocalypses, we have to acknowledge that the general picture is a lot less rosy than we tell ourselves when we’re brushing our teeth in the morning. In fact, we stagger from one crisis to the next, and while we are insulated in the industrialized world from some of them, we are by no means immune. Our prosperity and our civilized behaviour are fragile, our world is unequal and – for billions – bleakly callous.

  The opposing extremes I described – total immersion and passivity, and utopian liberty and creativity – are both unlikely. Patchwork is more probable than purity; if the late modern (the term post-modern has a number of meanings in different disciplines, some specific and others irksomely vague, and in any case suggests that we’re in some kind of afterparty of world history, which I think is untrue, so I use late modern, which means more or less what it sounds like and doesn’t instantly cause me to break out in sociological hives) condition we inhabit has any rules, that must be one of them: everything is muddled together. What is unclear and indeed undecided is which futures will spread out and flourish and which will fade away. But neither extreme is technologically or societally impossible. We live in a time when boundaries of the possible are elastic, while our unconscious notions of what can and cannot be done remain lodged in a sort of spiritual 1972. Unless we can change that, we’re going to find the next twenty years even more unsettling than the last. Abandon, please, the idea that no one will ever be able to connect a computer directly with the human mind and consider instead what will happen when they do, and what choices we might – must – make to ensure that when it becomes common practice the world is made better rather than worse.

  Only one thing is impossible: that life should remain precisely as it is. Too many aspects of the society in which we presently live are unstable and unsustainable. Change is endemic already, but more is coming. This is for various reasons a time of decision.

  A word about navigation:

  The first section of this book deals with the common nightmares of digitization and attempts to assess how seriously we should take them and whether they really derive from digital technology or from elsewhere. It contains a potted history of the Internet and a brief sketch of my own relationship with technology from birth onwards, and asks whether our current infatuation with all things digital can last. It also examines the notion that our brains are being reshaped by the digital experience, and considers our relationship with technology and science in general.

  The second section considers the wrangles between the digital and the traditional world, looks at the culture of companies and of advocates for digital change, and at the advantages and disadvantages of digital as a way of being. It deals with notions of privacy and intellectual property, design and society, revolution and riot, and looks at how digitization changes things.

  The third section proposes a sort of self-defence for the new world and a string of tricks to help not only with any digital malaise but also with more general ones. It asks what it means to be authentic, and engaged, and suggests how we go forward from here in a way that makes matters better rather than worse (or even the same).

  More generally: it is inevitable that I will be wrong about any number of predictions. No book which tries to see the present and anticipate the future can be both interesting and consistently right. I can only hope to be wrong in interesting ways.

  PART I

  1

  Past and Present

  I WAS BORN in 1972, which means I am the same age as Pong, arguably the first commercially successful video game. I actually preferred Space Invaders; there was a classic wood-panelled box in the kebab shop at the end of my road, and if I was lucky I’d be allowed a very short go while my dad and my brother Tim picked up doner kebabs with spicy sauce for the whole family. In retrospect, they may have been the most disgusting kebabs ever made in the United Kingdom. When the weather’s cold, I miss them terribly.

  I grew up in a house which used early (room-sized) dedicated word-processing machines. I knew what a command line interface was from around the age of six (though I wouldn’t have called it that, because there was no need to differentiate it from other ways of interfacing with a computer, ways which did not yet exist: I knew it as ‘the prompt’, because a flashing cursor prompted you to enter a command) and since my handwriting was moderate at best I learned to type fairly early on. Schools in London back then wouldn’t accept typed work from students, so until I was seventeen or so I had to type my work and then copy it out laboriously by hand. Exactly what merit there was in this process I don’t know: it seemed then and seems now to be a form of drudgery without benefit to anyone, since the teachers at the receiving end inevitably had to decipher my appalling penmanship, a task I assume required a long time and a large glass of Scotch.

  On the other hand, I am not what was for a while called a ‘digital native’. Cellphones didn’t really hit the popular market until the 1990s, when I was already an adult; personal computers were fairly unusual when I was an undergrad; I bought my first music in the form of vinyl LPs and cassette tapes. I can remember the battle between Betamax and VHS, and the arrival and rapid departure of LaserDisc. More, the house I lived in was a house of narratives. More than anything else, it was a place where stories were told. My parents read to me. My father made up stories to explain away my nightmares, or just for the fun of it. We swapped jokes over dinner, and guests competed – gently – to make one another laugh or gasp with a tall tale. Almost everything could be explained by, expressed in, parsed as, couched in a narrative. It was a traditional, even oral way of being, combined with a textual one in some situations, making use of new digital tools as they arrived, drawing them in and demanding more of them for the purpose of making a story. We weren’t overrun by technology. Technology was overrun by us.

  All of which makes me a liminal person, a sort of missing link. I have one foot in the pre-digital age, and yet during that age I was already going digital. More directly relevant to this book, my relationship with technology is a good one: I am a prolific but not excessive user of Twitter; I blog for my own website and for another one; I have played World of Warcraft for some years without becoming obsessive (I recently cancelled my subscription because the game has been made less and less sociable); I use Facebook, Google+, GoodReads and tumblr, but I am also professionally productive – since my first book came out in 2008, I have written three more, along with a screenplay and a number of newspaper articles. I am also a dad, an occasional volunteer for the charity of which my wife is director, and I have the kind of analogue social life everyone manages when they are the parent of a baby; so aside from whatever moderate brainpower I can bring to bear on this topic, I can speak with the authority of someone who manages their balance of digital and analogue life pretty well.

  I am, for want of a better word, a digital yeti.

  In the late 1950s and early 1960s, when my older brothers were being born, the US Advanced Research Projects Agency (ARPA, later renamed the Defense Advanced Research Projects Agency, or DARPA) planted the seed of the modern Internet. The network was constructed to emphasize redundancy and survivability; when I first started looking at the history of the Internet in the 1990s, I read that it had grown from a command and control structure intended to survive a nuclear assault. The 1993 Time magazine piece by Philip Elmer-DeWitt, which was almost the Internet’s coming-out party, cited this origin story alongside John Gilmore’s now famous quote that ‘The Net interprets censorship as damage and routes around it’. Although DARPA itself is unequivocally a military animal, this version of events is uncertain. It seems at least equally possible that the need for a robust system came from the unreliability of the early computers comprising it, which were forever dropping off the grid with technical problems, and that narrative is supported by many who were involved at the time.

  That said, it’s hard to imagine DARPA’s paymasters, in the high days of the Cold War and with a RAND Corporation report calling for such a structure in their hands, either ignoring the need or failing to recognize that a durable system of information sharing and command and control was being created under their noses. For whatever it’s worth, I suspect both versions are true to a point. In either case, the key practical outcome is that the Internet is in its most basic structure an entity that is intended to bypass local blockages. From its inception, the Internet was intended to pass the word, not ringfence it.
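  To make that structural claim concrete, here is a minimal sketch of the idea (purely illustrative, with invented node names rather than anything drawn from the real ARPANET): a handful of machines joined by redundant links, and a simple breadth-first search that still finds a route from one side of the network to the other when a node in the middle fails.

# A toy network: each node is linked to several others, so there is
# more than one way to get from 'A' to 'E'.
from collections import deque

links = {
    'A': {'B', 'C'},
    'B': {'A', 'C', 'D'},
    'C': {'A', 'B', 'E'},
    'D': {'B', 'E'},
    'E': {'C', 'D'},
}

def find_route(network, start, goal, failed=frozenset()):
    # Breadth-first search for a path from start to goal that avoids
    # any nodes listed in 'failed'; returns None if no path survives.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in network[node] - failed - seen:
            seen.add(neighbour)
            queue.append(path + [neighbour])
    return None

print(find_route(links, 'A', 'E'))                # ['A', 'C', 'E']
print(find_route(links, 'A', 'E', failed={'C'}))  # ['A', 'B', 'D', 'E']

The real Internet’s routing machinery is vastly more sophisticated than this, but the property being illustrated is the same one: so long as some chain of links survives, traffic can find its way around the damage.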

  The seed grew slowly; at the start of 1971 there were just fifteen locations connected via the ARPANET (Advanced Research Projects Agency Network). Through the 1970s and 1980s, growth came not from a single point but rather from many; educational networks such as the National Science Foundation Network and commercial ones such as CompuServe met and connected, using the basic protocols established by DARPA so that communication could take place between their users. I remember a friend at school, a maths whizz whose father was an industrial chemist, patiently logging in to a remote system using a telephone modem: you took the handset of your phone and shoved it into a special cradle and the system chirruped grasshopper noises back and forth. Eventually – and it was a long time, because the modem was transmitting and receiving more slowly than a fax machine – a set of numbers and letters in various colours appeared on a television screen. I could not imagine anything more boring. I asked my friend what it was, and he told me he was playing chess with someone on the other side of the world. He had a physical chessboard set up, and obediently pushed his opponent’s piece to the indicated square before considering his next move. Why they didn’t use the phone, I could not imagine.

  Around about the same time, my mother and I went to an exhibition of some sort, and there was a booth where a computer took a picture of you and printed it out, using the letters and numbers of a daisywheel printer, double-striking to get bold text, because the inkjet and the laser printer were still years away. The resulting image was recognizably me, but more like a pencil sketch than a photo. It was considered hugely advanced.

  By the time I arrived at Clare College, Cambridge, in 1991, email was a minor buzzword. There were terminals set up in the library for those who wanted to embrace the digital age. I discovered that a surpassingly pretty English student with whom I was besotted sat up late each night using Internet Relay Chat (the spiritual precursor of modern chat systems such as Skype) to talk to someone whose identity I never established, and who apparently by turns exasperated and delighted her. I began to think this electronic communications stuff might have something in it, so after some soul-searching I got myself a dial-up account with Demon Internet and dived into Usenet, the system of discussion groups which prefigured today’s website forums.