42. The Scottish Parliament building. Our sterling did not die in vain.
42. © Jeremy Sutton-Hibbert/Getty Images.
We see then that the self too is an imaginary story, just like nations, gods and money. Each of us has a sophisticated system that throws away most of our experiences, keeps only a few choice samples, mixes them up with bits from movies we’ve seen, novels we’ve read, speeches we’ve heard, and daydreams we’ve savoured, and out of all that jumble it weaves a seemingly coherent story about who I am, where I came from and where I am going. This story tells me what to love, whom to hate and what to do with myself. This story may even cause me to sacrifice my life, if that’s what the plot requires. We all have our genre. Some people live a tragedy, others inhabit a never-ending religious drama, some approach life as if it were an action film, and not a few act as if in a comedy. But in the end, they are all just stories.
What, then, is the meaning of life? Liberalism maintains that we shouldn’t expect an external entity to provide us with ready-made meaning. Rather, each individual voter, customer and viewer ought to use his or her free will in order to create meaning, not just for his or her life but for the entire universe.
The life sciences, however, undermine liberalism, arguing that the free individual is just a fictional tale concocted by an assembly of biochemical algorithms. Every moment the biochemical mechanisms of the brain create a flash of experience, which immediately disappears. Then more flashes appear and fade, appear and fade, in quick succession. These momentary experiences do not add up to any enduring essence. The narrating self tries to impose order on this chaos by spinning a never-ending story, in which every such experience has its place, and hence every experience has some lasting meaning. But, as convincing and tempting as it may be, this story is a fiction. Medieval crusaders believed that God and heaven provided their lives with meaning; modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.
Doubts about the existence of free will and individuals are nothing new, of course. More than 2,000 years ago thinkers in India, China and Greece argued that ‘the individual self is an illusion’. Yet such doubts don’t really change history much unless they have a practical impact on economics, politics and day-to-day life. Humans are masters of cognitive dissonance, and we allow ourselves to believe one thing in the laboratory and an altogether different thing in the courthouse or in parliament. Just as Christianity didn’t disappear the day Darwin published On the Origin of Species, so liberalism won’t vanish just because scientists have reached the conclusion that there are no free individuals.
Indeed, even Richard Dawkins, Steven Pinker and the other champions of the new scientific world view refuse to abandon liberalism. After dedicating hundreds of erudite pages to deconstructing the self and the freedom of will, they perform breathtaking intellectual somersaults that miraculously land them back in the eighteenth century, as if all the amazing discoveries of evolutionary biology and brain science have absolutely no bearing on the ethical and political ideas of Locke, Rousseau and Jefferson.
However, once the heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we – or our heirs – will probably require a brand-new package of religious beliefs and political institutions. At the beginning of the third millennium liberalism is threatened not by the philosophical idea that ‘there are no free individuals’, but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Will democracy, the free market and human rights survive this flood?
9
The Great Decoupling
The preceding pages took us on a brief tour of recent scientific discoveries that undermine the liberal philosophy. It’s time to examine the practical implications of these discoveries. Liberals uphold free markets and democratic elections because they believe that every human is a uniquely valuable individual, whose free choices are the ultimate source of authority. In the twenty-first century three practical developments might make this belief obsolete:
1. Humans will lose their economic and military usefulness, hence the economic and political system will stop attaching much value to them.
2. The system will continue to find value in humans collectively, but not in unique individuals.
3. The system will still find value in some unique individuals, but these will constitute a new elite of upgraded superhumans rather than the mass of the population.
Let’s examine all three threats in detail. The first – that technological developments will make humans economically and militarily useless – will not prove that liberalism is wrong on a philosophical level, but in practice it is hard to see how democracy, free markets and other liberal institutions can survive such a blow. After all, liberalism did not become the dominant ideology simply because its philosophical arguments were the most valid. Rather, liberalism succeeded because there was abundant political, economic and military sense in ascribing value to every human being. On the mass battlefields of modern industrial wars and in the mass production lines of modern industrial economies, every human counted. There was value to every pair of hands that could hold a rifle or pull a lever.
In the spring of 1793 the royal houses of Europe sent their armies to strangle the French Revolution in its cradle. The firebrands in Paris reacted by proclaiming the levée en masse and unleashing the first total war. On 23 August, the National Convention decreed that ‘From this moment until such time as its enemies shall have been driven from the soil of the Republic, all Frenchmen are in permanent requisition for the services of the armies. The young men shall fight; the married men shall forge arms and transport provisions; the women shall make tents and clothes and shall serve in the hospitals; the children shall turn old lint into linen; and the old men shall betake themselves to the public squares in order to arouse the courage of the warriors and preach hatred of kings and the unity of the Republic.’1
This decree sheds interesting light on the French Revolution’s most famous document – The Declaration of the Rights of Man and of the Citizen – which recognised that all citizens have equal value and equal political rights. Is it a coincidence that universal rights were proclaimed at the precise historical juncture when universal conscription was decreed? Though scholars may quibble about the exact relations between them, in the following two centuries a common argument in defence of democracy explained that giving citizens political rights is good, because the soldiers and workers of democratic countries perform better than those of dictatorships. Allegedly, granting political rights to people increases their motivation and their initiative, which is useful both on the battlefield and in the factory.
Thus Charles W. Eliot, president of Harvard from 1869 to 1909, wrote on 5 August 1917 in the New York Times that ‘democratic armies fight better than armies aristocratically organised and autocratically governed’ and that ‘the armies of nations in which the mass of the people determine legislation, elect their public servants, and settle questions of peace and war, fight better than the armies of an autocrat who rules by right of birth and by commission from the Almighty’.2
A similar rationale favoured the enfranchisement of women in the wake of the First World War. Realising the vital role of women in total industrial wars, countries saw the need to give them political rights in peacetime. Thus in 1918 President Woodrow Wilson became a supporter of women’s suffrage, explaining to the US Senate that the First World War ‘could not have been fought, either by the other nations engaged or by America, if it had not been for the services of women – services rendered in every sphere – not only in the fields of effort in which we have been accustomed to see them work, but wherever men have worked and upon the very skirts and edges of the battle itself. We shall not only be distrusted but shall deserve to be distrusted if we do not enfranchise them with the fullest possible enfranchisement.’3
However, in the twenty-first century the majority of both men and women might lose their military and economic value. Gone is the mass conscription of the two world wars. The most advanced armies of the twenty-first century rely far more on cutting-edge technology. Instead of limitless cannon fodder, countries now need only small numbers of highly trained soldiers, even smaller numbers of special forces super-warriors and a handful of experts who know how to produce and use sophisticated technology. Hi-tech forces ‘manned’ by pilotless drones and cyber-worms are replacing the mass armies of the twentieth century, and generals delegate more and more critical decisions to algorithms.
Aside from their unpredictability and their susceptibility to fear, hunger and fatigue, flesh-and-blood soldiers think and move on an increasingly irrelevant timescale. From the days of Nebuchadnezzar to those of Saddam Hussein, despite myriad technological improvements, war was waged on an organic timetable. Discussions lasted for hours, battles took days, and wars dragged on for years. Cyberwars, however, may last just a few minutes. When a lieutenant on shift at cyber-command notices something odd is going on, she picks up the phone to call her superior, who immediately alerts the White House. Alas, by the time the president reaches for the red handset, the war has already been lost. Within seconds a sufficiently sophisticated cyber strike might shut down the US power grid, wreck US flight control centres, cause numerous industrial accidents in nuclear plants and chemical installations, disrupt the police, army and intelligence communication networks – and wipe out financial records so that trillions of dollars simply vanish without a trace and nobody knows who owns what. The only thing curbing public hysteria is that, with the Internet, television and radio down, people will not be aware of the full magnitude of the disaster.
On a smaller scale, suppose two drones fight each other in the air. One drone cannot open fire without first receiving the go-ahead from a human operator in some distant bunker. The other is fully autonomous. Which drone do you think will prevail? If in 2093 the decrepit European Union sends its drones and cyborgs to snuff out a new French Revolution, the Paris Commune might press into service every available hacker, computer and smartphone, but it will have little use for most humans, except perhaps as human shields. It is telling that already today in many asymmetrical conflicts the majority of citizens are reduced to serving as shields for advanced armaments.
43. Left: Soldiers in action at the Battle of the Somme, 1916. Right: A pilotless drone.
43. Left: © Fototeca Gilardi/Getty Images. Right: © alxpin/Getty Images.
Even if you care more about justice than victory, you should probably opt to replace your soldiers and pilots with autonomous robots and drones. Human soldiers murder, rape and pillage, and even when they try to behave themselves, they all too often kill civilians by mistake. Computers programmed with ethical algorithms could far more easily conform to the latest rulings of the international criminal court.
In the economic sphere too the ability to hold a hammer or press a button is becoming less valuable than before, which endangers the critical alliance between liberalism and capitalism. In the twentieth century liberals explained that we don’t have to choose between ethics and economics. Protecting human rights and liberties was both a moral imperative and the key to economic growth. Britain, France and the United States allegedly prospered because they liberalised their economies and societies, and if Turkey, Brazil or China wanted to become equally prosperous, they had to do the same. In many if not most cases it was the economic rather than the moral argument that convinced tyrants and juntas to liberalise.
In the twenty-first century liberalism will have a much harder time selling itself. As the masses lose their economic importance, will the moral argument alone be enough to protect human rights and liberties? Will elites and governments go on valuing every human being even when it pays no economic dividends?
In the past there were many things only humans could do. But now robots and computers are catching up, and may soon outperform humans in most tasks. True, computers function very differently from humans, and it seems unlikely that computers will become humanlike any time soon. In particular, it doesn’t seem that computers are about to gain consciousness and start experiencing emotions and sensations. Over the past half century there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their economic value because intelligence is decoupling from consciousness.
Until today high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon outperform human consciousness in recognising patterns.
Science fiction movies generally assume that in order to match and surpass human intelligence, computers will have to develop consciousness. But real science tells a different story. There might be several alternative ways leading to super-intelligence, only some of which pass through the straits of consciousness. For millions of years organic evolution has been slowly sailing along the conscious route. The evolution of inorganic computers may completely bypass these narrow straits, charting a different and much quicker course to super-intelligence.
This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just an amusing pastime for philosophers. But in the twenty-first century this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.
Armies and corporations cannot function without intelligent agents, but they don’t need consciousness and subjective experiences. The conscious experiences of a flesh-and-blood taxi driver are infinitely richer than those of a self-driving car, which feels absolutely nothing. The taxi driver can enjoy music while navigating the busy streets of Seoul. His mind may expand in awe as he looks up at the stars and contemplates the mysteries of the universe. His eyes may fill with tears of joy when he sees his baby girl taking her very first step. But the system doesn’t need all that from a taxi driver. All it really wants is to bring passengers from point A to point B as quickly, safely and cheaply as possible. And the autonomous car will soon be able to do that far better than a human driver, even though it cannot enjoy music or be awestruck by the magic of existence.
We should remind ourselves of the fate of horses during the Industrial Revolution. An ordinary farm horse can smell, love, recognise faces, jump over fences and do a thousand other things far better than a Model T Ford or a million-dollar Lamborghini. But cars nevertheless replaced horses because they were superior in the handful of tasks that the system really needed. Taxi drivers are highly likely to go the way of horses.
Indeed, if we forbid humans to drive not only taxis but vehicles altogether, and give computer algorithms a monopoly over traffic, we can then connect all vehicles to a single network, thereby rendering car accidents far less likely. In August 2015, one of Google’s experimental self-driving cars had an accident. As it approached a crossing and detected pedestrians wishing to cross, it applied its brakes. A moment later it was hit from behind by a sedan whose careless human driver was perhaps contemplating the mysteries of the universe instead of watching the road. This could not have happened if both vehicles had been guided by interlinked computers. The controlling algorithm would have known the position and intentions of every vehicle on the road, and would not have allowed two of its marionettes to collide. Such a system would save lots of time, money and human lives – but would also eliminate the human experience of driving a car and tens of millions of human jobs.4
Some economists predict that sooner or later, unenhanced humans will be completely useless. Robots and 3D printers are already replacing workers in manual jobs such as manufacturing shirts, and highly intelligent algorithms will do the same to white-collar occupations. Bank clerks and travel agents, who a short time ago seemed completely secure from automation, have become endangered species. How many travel agents do we need when we can use our smartphones to buy plane tickets from an algorithm?
Stock-exchange traders are also in danger. Most financial trading today is already being managed by computer algorithms that can process in a second more data than a human can in a year and can react to the data much faster than a human can blink. On 23 April 2013, Syrian hackers broke into Associated Press’s official Twitter account. At 13:07 they tweeted that the White House had been attacked and President Obama was hurt. Trade algorithms that constantly monitor newsfeeds reacted in no time and began selling stocks like mad. The Dow Jones went into free fall and within sixty seconds lost 150 points, equivalent to a loss of $136 billion! At 13:10 Associated Press clarified that the tweet was a hoax. The algorithms reversed gear and by 13:13 the Dow Jones had recuperated almost all the losses.
Three years earlier, on 6 May 2010, the New York stock exchange underwent an even sharper shock. Within five minutes – from 14:42 to 14:47 – the Dow Jones dropped by 1,000 points, wiping out $1 trillion. It then bounced back, returning to its pre-crash level in a little more than three minutes. That’s what happens when super-fast computer programs are in charge of our money. Experts have been trying ever since to understand what happened in this so-called ‘Flash Crash’. They know algorithms were to blame, but are still not sure exactly what went wrong. Some traders in the USA have already filed lawsuits against algorithmic trading, arguing that it unfairly discriminates against human beings who simply cannot react fast enough to compete. Quibbling whether this really constitutes a violation of rights might provide lots of work and lots of fees for lawyers.5