Part of the problem is that we have few models in mainstream cultural life that interpret the way the world is, what it may become, or how we arrived at this point, with reference to science. The focus of culture – of theatre, fiction and art – is on personal, interior journeys and emotional and moral truths. These are what might be termed ‘eternal human stories’. It’s odd, but the science that played its part in bringing us here, and the technologies and ethos that go along with it and which create the world we inhabit, are largely unrepresented both in these eternal stories and in public intellectual discourse. The classical Greek myth of Prometheus, the thief of fire, and the story of Icarus, which is in some ways its counterpart narrative, are retold from time to time, but almost always as tragedies of over-reaching. Science is the place from which trouble comes; solutions derive from the human heart, with its capacity to balance the excesses of the brain.

  In government, the situation is if anything worse. Science – providing as best it can statements of truth – is but one part of a decision-making process that must also satisfy or at least take into account the wild and inaccurate received positions of MPs and pressure groups. The tired anti-sex rhetoric of religious conservatives shades inevitably into a stout-hearted denial of compelling evidence that sex education reduces STDs and teen pregnancy, and for some unfathomable reason this denial of reality is not grounds for de-selection but for celebration. In the twenty-first century we still have elected leaders who choose policy on the basis of what they wish were true rather than what is known to be so. I’ve heard this called ‘policy-based evidence-making’; our politics, endlessly negotiating and compromising, has no space for the exigencies of the scientific world, so science often seems to carry the can, on the one hand for having the temerity to report facts that are unwanted, and on the other for generating technology that proves – as any widely adopted and significant technology must do – disruptive.

  To make things worse, much of the scientific world is apprehended only by cognition. The truths of atomic structure and gravitation are not perceptible to human senses. Our natural understanding of the universe occurs at the human scale, where objects are made of solid chunks of matter, heat is heat rather than molecular vibration, and the sun rises (rather than the Earth rotating to reveal our star where it has always been in relation to our planet). Einstein’s world, in which our velocity affects our mass and the flow of time around us is different from that around a body moving more slowly, is a strangeness almost no one considers in their day-to-day lives. It simply makes no sense, so we don’t see it. The weirdness of quantum theory, in which information appears to travel backwards through time and a cat may be both alive and dead until an observation is made, is – in so far as it intrudes on our notice at all – a fictional device, a thought experiment or an opportunity for humour.

  And yet, projects to construct the first quantum computers are under way even now, and so far seem likely to succeed. If they do, the world will change again, as processing becomes ridiculously rapid and previously imponderable mathematical problems can be dealt with in minutes. The practical upshots will be an end to current methods of cryptography – which are used to secure everything from credit card transactions and diplomatic communications to air traffic control – and a huge boost to biological and medical research, not to mention physics. Climate modelling will get better, or at least faster. The list of things we cannot do will once more get shorter. And yet, almost no one is thinking about it, or, at least, not aloud. Has the Department of Health considered the budgetary implications? Has the Chancellor discussed the issue with the Governor of the Bank of England? If they have, they surely have not done so publicly. Why not? When these developments happen – if they do – the results will shunt us into another series of shifts in the way the world works, and we’ll have to adjust. It might help to see them coming up over the horizon.

They don’t talk about them because we as a society are unprepared for the discussion. Where once no one could be considered well-educated without a grounding in mathematics as well as literature, biology as well as music, some time around the early 1900s the perception changed – at least in the UK. F.R. Leavis, in reviewing H.G. Wells, argued that Wells should be considered a portent, a type, rather than a proper writer. Leavis also anticipated part of today’s angst about information technology: ‘the efficiency of the machinery becomes the ultimate value, and this seems to us to mean something very different from expanding and richer human life.’ That distinction is in my view fundamental to the discussion here: Leavis makes a separation between machinery, and by implication mechanisms and logic, and ‘richer human life’, which is achieved elsewhere.

  The writer and physicist C.P. Snow retaliated that the mainstream intellectual culture was ‘behaving like a state whose power is rapidly declining’. The mood got worse from there, with Snow asserting that there was a growing schism between ‘feline’ literary culture – which he felt was redefining the term ‘intellectual’ to exclude figures like Ernest Rutherford (generally considered the father of nuclear physics) but include T.S. Eliot – and scientific culture, which was ‘heroic’, and heterosexual. Leavis replied that Snow’s pontifical tone was such that ‘while only genius could justify it, one cannot readily think of genius adopting it’. He went on to clarify, in case any scientists in the room might have missed the point, that he considered Snow as ‘intellectually undistinguished as it is possible to be’.

Leaving aside Snow’s evident homophobia as an ugly aspect of his time and a sorry echo of the hounding of Alan Turing after the Second World War, the spat has resonance today. The present literary establishment’s relationship with science is profoundly uncomfortable, and literary fiction predicated on science is rare, perhaps because any that touches upon science is liable to be reclassified as science fiction, and therefore not ‘intellectual’. The Time Machine, Brave New World and 1984 are all strikingly important novels, and all of them are pretty clearly science fiction, but it can be hard to get anyone to acknowledge that out loud. The science fiction aspect is generally dismissed as ‘the least important part’. Time has washed them clean; acknowledged importance has removed the uncomfortable trace of genre. And try telling anyone that Cold Comfort Farm – a novel written in 1932 about a near future some time after a 1946 war with Nicaragua, in which everyone communicates by video phone – is science fiction. Most people I talk to about it don’t remember the setting at all; it’s as if it just can’t possibly be there, so it never was.

  When Jeanette Winterson wrote a novel with elements that could be tracked as science fiction, she had to fight a species of rearguard action against mutterings of uncertainty and disapproval, giving a rationale for including these taboo topics. She told New Scientist magazine in 2007:

  I hate science fiction. But good writers about science, such as Jim Crace or Margaret Atwood, are great. They take on science because it’s crucial to our world, and they use language to give energy to ideas. But others just borrow from science and it ends up like the emperor’s new clothes, with no understanding of the material. But you shouldn’t fake it because science is too important, it’s the basis for our lives. I expect a lot more science in fiction because science is so rich.

Which sounds to me rather severe: the element of play, of wonder, that characterizes much science fiction and brings science into the living world rather than making it something that can only be observed at a great distance, is missing.

Consider this rather different perspective: the writer Neil Gaiman, as a guest of China’s largest state-approved science fiction convention, wondered aloud why China had changed its mind about a genre it previously discouraged. (Science fiction, among its many other evils, has long been a way for cheeky dissidents in any country to express political, social and sexual ideas that would otherwise get them locked up.) Gaiman was told that China had researched the innovation powerhouses of the United States, and discovered that the common factor among all the companies of note in the technological arena was simple: people in those outfits read and were inspired by science fiction. So now China was encouraging its own people to read it, too, in order to become a creator of new technology rather than just an industrial powerhouse turning out tech products for the United States.11

The dispute doesn’t begin with Leavis and Snow, of course; it’s the clash of two competing interpretations of life. On the one hand, you have the Romantic movement, which is fundamentally mystical, seeks meaning in peak experiences, and considers all that is important in life to be poetic and irreducible. On the other, you have the Enlightenment, which believed everything would eventually be explained by science and reason, and promised a world founded upon clearly understandable principles of rational thought. Neither church has ever been able to deliver entirely, and the present situation is a typically modern compromise, a kind of patchwork in which both sides achieve primacy in a circumscribed arena: politics and daily life are generally governed, in those regions where the influence of these competing ideas is felt, by a sort of watered-down rationalism that is mostly pragmatic, and which makes room for anti-scientific balderdash if it appeals to the popular perception. Appeals to idealism – a Romantic trait – are shrugged off as impracticable and naïve so that business may continue as usual. Culture, meanwhile, is owned by the mystical Romantic thread, suitably embellished with borrowings from psychoanalysis and science where appropriate, but still fundamentally touting a notion that some experience cannot be codified but must be lived, and that any attempt to replicate it is not only doomed to failure but, more importantly, a fundamental failure to understand the world.

  And yet so often, our majority culture doesn’t talk about the sciences at all, seeing them as an irrelevance at best, and a distraction from real human truth at worst. This is a fundamental error. We as human beings are not separate from our tools or the environment we make with them. We are not separate from one another. We are individuals, yes, but individuals defined in part by our relationships with others and with what is around us. The investigation of the inner self is vital, but it is not comprehensive as a statement of who and what we are. We need to learn to speak the language of science and follow its logic, to incorporate it into our understanding of what is real and above all what is meaningful. It is definitive of our world, like it or not, unless we intend to drift back to pre-Pasteur medievalism and die at forty with no teeth. It is part of the human condition, in some ways definitive of us as creatures, that we reshape our environment, that we seek understanding of the universe – for control, yes, but also as part of who we are. We make our world, and any discourse of culture that ignores that aspect of us is as false as one that affords no importance to the interior life.

  Which leaves my second question: why do some people react to any suggestion that the Internet and its related technologies may not be an a priori good as if it were a violent attack?

  blindgiant.co.uk/chapter4

  PART II

  5

  Work, Play and Sacred Space

  IF OUR TROUBLED relationship with science is partly to blame for the willingness of some to project the modern sense of confusion on to devices that emerged after that confusion had already settled upon us, what about the almost religious zeal with which others defend digital technology? The answer to that question is actually more interesting to me, because I think it goes to the heart of the Internet’s role in the human world and the relationship that currently exists – as well as the one we desperately need to forge – between ourselves, our society and our tools.

  Between 1980 and the millennium, the Internet became a play space, an ‘anarchistic electronic freeway’. Looking once more at the 1993 Time article, it’s noticeable that both of the things mentioned by Glee Willis – family and sex – are private matters, things that belong to the home and the individual, not the state. They are aspects of the hearth, the personal space I discussed earlier, governed not by sternly codified laws or regulations (unless something goes very, very wrong) but by feelings of natural fairness, desire and emotional reciprocity. They are both venues for relaxation and non-cognitive satisfaction: for immanent, biological living. You could argue that that kind of living is what the hearth is, or is for, and the first online communities retained that ethos. They were, in the philosophical sense, naïve: they were unconsidered, did not spend a huge amount of time examining their own meaning. They were just made up of people living, sharing experiences, helping one another, falling in love, rowing and fighting, and so on. In other words, many of the first colonists of the digital realm – those who arrived just after the frontiersmen from MIT – weren’t there for professional reasons. They did not erect a shopfront, because there was no one to sell to. They were homesteaders, and they extended the hearth into the online world, and they did so mostly not for intellectual interest, but because it was fun. It was a strange new thing, and they went about it playfully.

‘Play’ is a small word that describes a very big concept. Some of the time it denotes something children do more than adults, an unstructured babble of changing fantasies and improbable imaginings. In fact, we traditionally define the arrival of adulthood as the end of freedom to play, which can make the conventional education system into the slow banishment of creativity, as the urge to turn ideas and wisdoms upside down and shake them is cut away and replaced with homogenized thinking. But play is much more than simply what you do to pass the time while you’re waiting to grow up – and it’s more than just a disguised form of learning, too. The renowned Dutch historian Johan Huizinga asserted that all culture was partly a form of play, and enumerated a number of qualities that he felt play possessed, among them that it is separate from everyday ‘real life’ both in location and in duration, and that it is not connected with material reward. The digital environment initially met both of these criteria, and even now many of the activities that enliven it – social media sites, blogs, games and user-generated content on YouTube – are free.

Huizinga is not the only one to place great importance on play. The psychoanalyst Jacques Lacan and the critic Roland Barthes both used the concept of jouissance (‘enjoyment’) to denote something somewhat similar, though jouissance has more than a hint of the erotic: the word also means ‘orgasm’. Karl Marx and Ayn Rand (an alarming pairing) both proposed that the basis of unhappiness and iniquity in human society was the subversion or appropriation of the creative urge by malign forces – although they both characterized that creative urge as an urge to work rather than to play. For Marx, looking at the working conditions of the nineteenth century, ‘malign forces’ meant capitalism. For Rand, a refugee from Soviet Russia, they meant socialism. Both urged forms of revolution as a proportionate response to the violation of the fundamental human need to create. Creativity appears to span the gap between working and playing – or, rather, it seems that creation as an activity is not interested in the final fate of the product. More, both work and play can drop into a focused freedom of the mind, what the psychologist Mihaly Csikszentmihalyi would call a ‘flow state’, in which labour becomes a function of identity and an expression of it: a route to contentedness through a state somewhere between meditation and intense concentration.

The Internet was staked out early as a play space, a place where there was no need for the conventional rules of society because there were no physical consequences to what happened there. Safety was guaranteed, because the only thing happening online was words. That being the case, the entire digital enterprise could be governed by nothing more stringent than guidelines. Free speech was assumed. The whole concept was an experiment, an opportunity to do things right. The Occupy camps around the world, with their group decision-making, quasi-collectivism and barter culture, are drawing on the same ideals. They are not just protest; they are an actual, simple attempt to organize society in a different way.

The hearth space, with its uncodified rules and informal ethos, is set against the professional world outside the home, where the rules are made to govern not a single family but every family. Laws are an attempt to set down justice in a form that can predictably be applied across thousands of non-identical cases, to counter patronage and favouritism. Professional personas, meanwhile, attempt a similar thing: one person functioning as a tax inspector is supposed to be identical in effect to another. There should be no difference: the identity of the individual is submerged beneath the role. The same is true in a corporate situation: ideally, an employer wants to be able to send any given employee to perform a particular task for which they are qualified and know that the result will be the same. The space outside the hearth is owned by systems – interlocking collections of rules performing the functions of government and commerce, acted by human beings. Balancing the demands of these two worlds is how most of us spend our lives: making sure we spend enough time working to sustain our home lives; making sure we spend enough time with our home lives to maintain them and enjoy them while not losing our jobs.

The playfulness of the Internet, of course, remains to this day. YouTube videos made for fun (often to a very high standard) and LOLcats proliferate. Interesting to me as an author is the playfulness of language that has evolved out of digital technology: the variations of English that have come out of the new media are often zesty references to typing errors that occur when you’re trying to play and type on the same keyboard. My favourite is the verb ‘to pwn’. It means ‘to rule’ or ‘to achieve a crushing victory’ (appropriately, since Huizinga wrote extensively about the play of chivalric conflict). It has a sense of utterly unashamed jubilation, even gloating, but it’s also used ironically, with a knowing nod to how silly it is. It has both transitive and intransitive forms and obviously isn’t intended to be said aloud. It’s a typing joke, inaccessible to the ear. In fact, it relies on the layout of the QWERTY keyboard. The evolution of the word, I think, is relatively straightforward: typing quickly, ‘I won’ becomes ‘I own’. A new use of ‘own’ arises, meaning ‘to win with extreme prejudice’. A further slip of the finger – ‘p’ sits beside ‘o’ on the keyboard – generates ‘I pwn’. What’s significant is that it has been adopted – infused with lexicographic life.