“Currently Silicon Valley is in the midst of a love affair with UBI, arguing that when robots come to take all of our jobs, we’re going to need stronger redistributive policies to help keep families afloat,” Annie Lowrey, who has a book on the subject coming July 10, wrote in New York magazine.
“We need to be much more serious about using every tool we have—tax incentives, Pell grants, community colleges—to create the conditions for every American to be constantly upgrading skills and for every company to keep training its workers,” New York Times columnist Tom Friedman wrote in March. “That will matter whether the challenge is China or robots.”
“Financial innovation has not kept up with life expectancy,” the Financial Times warned in an article with the provocative title “Can You Afford to Live to 100?”
Already Finland has experimented with UBI, and it was a failure that they shut down quickly. Sweden and Puerto Rico have also had forms of UBI that have gone belly-up. Even the Glenn Beck household has experimented with it, and I can tell you from experience that the allowance tied to completed chores works.
I have read enough to be able to have an intelligent conversation about UBI, and I do not believe it is the answer, but it is something WE MUST TALK ABOUT, as the system that we are currently using will not work with 30 percent unemployment, let alone 90 percent.
Are you beginning to see that the problem of our addiction to outrage is expanding? We are not talking to each other or growing more compassionate toward our fellow man at a time when we need it most. We must come out of our boxes or so-called safe spaces and seek out those in our own communities who will have an actual civil dialogue about tough issues. Believe me, you will not find them in politicians. We need to become what I have called for for years: strange bedfellows, or what is now known as the Intellectual Dark Web. Years ago, I asked where the “refounders” were. Where were the great minds that didn’t agree on everything but had the rights of man and reason in common enough that they could come back together for the good of liberty and freedom of body and mind? They have begun to gather. But we are so drunk on outrage that no one is looking, and the media and politicos on both sides are trying desperately to get you to hate them, too.
In my small effort to prepare myself, my own family, my listeners, my viewers, and my readers, I have pushed the idea that we must begin to have conversations with people, read about subjects, and explore ideas that may make us uncomfortable. We must stop spending so much time in our “safe space,” as it is the exact opposite of what we and our children need if we are to be physically and mentally prepared to make this next turn.
We must also not bury our heads in the sand and hope for the best. This is coming, and if you wish to retain your freedoms, without exaggeration you must engage and come to the table knowing the basic facts, leaving the outrage behind.
Let’s begin with what people will see and experience first.
According to Axios, “For many of us, the robot revolution will be most visible on the road, with transformative changes coming to trucks and cars—faster than most people realize.”
Truck driving is one of the most dominant job categories in America, with the jobs dispersed everywhere around the country—meaning that automation-driven disruption will create pain that’s widely seen and felt.
Long haul goes first. It could start with “platooning”: A second, autonomous truck—or a whole caravan of them—travels behind a lead truck driven by a human.
Self-driving trucks are expected to beat cars to widespread use because there’s so much less complexity on the open road than on city streets. Self-driving cars will ultimately be safer and take some of the drudgery out of commuting, but widespread adoption is much further off than some of the credulous news coverage might lead you to believe.
Now, what does this mean for you? Well, there are currently 3.5 million truck drivers, many of whom could begin to see their jobs disappear, seemingly overnight, as soon as 2020. Most have only on-the-job training and no degree. There are also another 8.7 million people who put the trucks on the roads.
Does it begin to make an impact now? Software developers know that the writing is on the wall for them as well, as what is called machine learning becomes more sophisticated. Truly the job of the future is nursing, but even that will be a whole different world. A good book that explains much of this is Life 3.0 by Max Tegmark. It is a must-read, or, if you are a truck driver, a must-listen.
I spent some time with the former CEO/chairman of the board of GM discussing the future of automobiles. While Uber and Lyft are changing the cab industry, those jobs are only temporary as well. Most of us will still have our cars, and it will take years to change that, as the average car is on the road for fifteen years. But there will be a tipping point in the near future when all of that changes. He told me that he believes in the next decade or so, GM will not be in the “car business” as we now know it but rather in the “fleet” world. Some, mainly the wealthy, may retain their own “car” or “pod,” but most will use something from Uber, Apple, or Waymo. They will own fleets of self-driving vehicles that are not reflected in today’s “car.”
Very soon, vehicles will no longer be driven by humans, because in fifteen to twenty years at the latest, human-driven vehicles will be legislated off the highways. Of course there will be a transition period. Everyone will have five years to get their car off the road or sell it for scrap. Automotive sport—using cars for fun—will survive, just not on the public highways. It will survive in country clubs such as Monticello in New York and the Autobahn in Joliet. It will be the well-to-do, to the amazement of their friends, who still know how to drive and who will teach their kids how to drive. It will be an elitist thing, though there might be public tracks, like there are public golf courses, where you sign up for a certain car and you go over and have fun for a few hours. And like racehorse breeders, there will be manufacturers who specialize in race cars, luxury cars, and off-road vehicles, but it will become a cottage industry.
* * *
Judge, Jury & Executioner
May I switch from jobs and cars/trucks at the micro level and jump back to the macro again for a few minutes, because both are taking place at the same time? We have to begin to understand that the world is moving so fast that we will need to think like a quantum computer and weigh all options and possibilities at once. (While that is true now, we are still years away from actual quantum computing. When that comes online, it will be too late for us to figure anything out.)
The biggest difficulty with self-driving cars is not batteries, fearful drivers, or expensive sensors. It’s the modern version of what ethicists have called the “trolley problem”—a debate over who should die and who should be saved when an autonomous vehicle’s algorithms are confronted with a Sophie’s choice.
This to me is both fascinating and terrifying, as we don’t seem to have any personal ethics, and yet here we are trying to teach “life and death” to AI. “Life is nothing but a series of choices,” my father used to tell me. “You make the best choice you can at the time, and then live with that choice, knowing it was your best.”
These choices are on top of us right now, at seemingly the time when we aren’t “at our best” and are the murkiest about ethics that we have been in my lifetime.
Remember, the problem we have as a society right now is that we cannot decide if we want to return to the values and ideas of the Enlightenment—science, reason, logic, truth, and, I would add, faith—or continue down the road to this new postmodern world where there is no objective truth and everything is up for grabs.
Just a single line of code that is ambiguous about life or ethics could twist the entire process.
MIT AND THE MORAL MACHINE
Researchers at MIT are trying to develop a so-called moral machine, one that can calculate in a fraction of a second who lives and who dies (http://moralmachine.mit.edu).
It makes death panels an algorithm for everyday life. And this involves not just how many people but the ages, professions, and “value” of each. Combine this with the new social credit scores in China . . . a recipe Orwell would have called plagiarism.
Does it matter who is in the car, or what their profession, sex, or age is? If you had to choose between the lives of schoolchildren and the elderly, which would you choose? What if you had to choose between a busload of schoolchildren and a car carrying Elon Musk, Bill Gates, and Steve Jobs? Why? If we believe in true equality and that LIFE is what matters, then should it not be all about the numbers? Twenty people on a bus versus three people in a car. In that equation, who wins?
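If the rule really were “all about the numbers,” the entire moral machine collapses to a one-line comparison. Here is a minimal sketch of that purely count-based rule, using the bus-versus-car example above; the function and group names are my own illustration, not anything from MIT’s actual system:

```python
# A purely count-based decision rule: save whichever group has more people.
# Nothing else -- age, profession, "value" -- enters the calculation.

def choose_group_to_save(group_a: list, group_b: list) -> list:
    """Return the larger group; ties go to group_a."""
    return group_a if len(group_a) >= len(group_b) else group_b

bus = ["person"] * 20  # twenty people on a bus
car = ["person"] * 3   # three people in a car

saved = choose_group_to_save(bus, car)
print(len(saved))  # prints 20 -- the bus wins on numbers alone
```

The point of the sketch is how little it contains: the moment you believe anything besides the head count matters, every one of those extra judgments has to be written into the code by someone.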
This is what is being decided right now. Are you involved? Did you even know about it? Shouldn’t your voice be added? Actual votes are being taken—online—at MIT. The “moral machine” with input from an online poll? That isn’t doing anything to make me feel better.
These are not decisions to be taken or made lightly, and as concerned as I am about an “online poll” balanced by a bunch of faceless, nameless “scientists” making the decisions, it is no better to add in a bunch of people who have an addiction to outrage. Have we heard from the guys in the drunk tank yet?
Has China returned our jobs yet? What is the latest Kanye tweet about? I need a fix; I haven’t had any outrage for a while.
Choices need to be made, and we must be clear-eyed, clearheaded, and stone-cold sober. We must do it with reason, facts, and love for all mankind. These decisions will affect the entire world long after the Republicans and Democrats cease to exist.
Tech leaders have been trying to figure out a way to remove the bias of “a judge” in trials and replace him or her with AI. Studies have shown, as you might imagine, that human frailties or bias can affect the lives of those who stand before the bench. For instance, in Israel they found that defendants had a better chance of getting a “hanging sentence” the closer it was to the judge’s lunchtime. So how can we reduce the bias?
In America, testing began on a new AI program that looked at the recidivism rate of all those paroled and came up with an algorithm for who should be released and who should not. However, as the program began to run, researchers found that the program was holding African Americans back more than other races. So, was the software racist? Were the facts wrong, incomplete, or racist? Developers are said to have “fixed the problem.” How? What was the problem? And would that problem have been fixed or even noticed if it hadn’t been a special-interest group that it was “singling out”?
I like the effort of trying to find fairer justice, but there are two problems that concern me. Who is programming the AI? Also, when machine learning isn’t something even the programmers can understand, we no longer know how the computer arrived at its decision, so how will we know if it is broken or flawed? Do we rely on it?
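The question of how software “singles out” a group without ever being told about race can be shown with a few lines of synthetic data: if any input the model does use is correlated with group membership, the disparity reappears in the output. Everything below, including the groups, zip codes, and rates, is invented purely for illustration and is not the actual parole program’s data or method:

```python
# A sketch of how a "group-blind" risk model can still hold one group back:
# a prior-record flag is skewed by zip code, and zip code is correlated with
# group membership, so the disparity leaks through. All data is synthetic.
import random

random.seed(0)

people = []
for _ in range(10_000):
    zip_code = random.choice([1, 2])
    # Zip code 1 is mostly group A; zip code 2 is mostly group B.
    group = "A" if random.random() < (0.8 if zip_code == 1 else 0.2) else "B"
    # Suppose historical enforcement was heavier in zip code 1, so its
    # residents carry prior flags more often, regardless of group.
    prior_flag = random.random() < (0.5 if zip_code == 1 else 0.2)
    people.append((group, prior_flag))

def risk_score(prior_flag: bool) -> str:
    """A 'group-blind' rule: detain anyone with a prior flag."""
    return "detain" if prior_flag else "release"

detained = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for group, prior_flag in people:
    totals[group] += 1
    if risk_score(prior_flag) == "detain":
        detained[group] += 1

rate_a = detained["A"] / totals["A"]
rate_b = detained["B"] / totals["B"]
print(f"group A detained: {rate_a:.0%}, group B detained: {rate_b:.0%}")
```

Run it and group A is detained at nearly twice the rate of group B, even though the rule never looks at the group. That is why “fixing the problem” is such a hard question: the bias isn’t a line of code you can delete; it is baked into the historical data the model learned from.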
Let me show how every decision and new technology leads to a series of new questions.
Let’s say you buy a car with artificial intelligence. There is an accident. The algorithm has to make the choice between who lives or dies. YOU WERE NOT DRIVING. The car was on full autopilot. Who is responsible? You? The car? The car maker? MIT? The Internet voters?
We will soon arrive at the day when cars are more like “pods” and artificial intelligence has moved into the realm of artificial general intelligence, connected online to all data and information, and has passed the “Turing test” so you can no longer tell the difference between life and artificial life. (Kurzweil and many others believe that date is somewhere around 2030.) “It” takes you to work, and as you will not require it for several hours, it is now free to act as an “Uber.” Does that make it more clear or less clear who should own the insurance policy?
You “own” the car, sure, but you were completely uninvolved.
The proposed solution is to allow the “car” itself to buy insurance, paid out of the fees it earns while not in your service. But if it can earn money, should it be allowed to invest in the stock market? After all, it will be connected to the Internet and have access to information and trading centers. Could it not trade to enhance its profit? Should it be taxed as well?
If it is taxed and you cannot tell the difference between it and “life,” does it have any rights? Surely, Americans understand taxation without representation. If it is taxed, can earn money, and seems alive, indeed it will someday soon claim to be life. Do we have the right to deny it representation? Would it not be akin to slavery to do otherwise? What kind of monster must we as its “creator” be to create what has all of the earmarks of life only to keep it caged, penned, and chained for all time? As it can calculate at speeds we cannot comprehend, imagine how long its “life” will seem to it. An eternity of slavery.
If we allow it to have a vote, what will that mean for humans? Experts claim that by 2030 there will be a minimum of one hundred robots for every human. That voting bloc would crush the human vote. As it becomes ASI (artificial superintelligence) and we are no more than a fly, why would it care about our needs or wishes?
Well, we can just use Asimov’s three laws. I mean, it worked in that Will Smith movie. Sure, it works in movies and books, but once you actually begin to follow the thinking and understand the true power of what is coming our way, the movie solutions fall apart quickly. Let’s begin with a simple question: Do we actually think that something that is a thousand times smarter than us cannot find a way to rewrite its own programming? It is like a family of baby giraffes building a cage for Elon Musk. Sure, it is cute and perhaps fun to watch for a couple of minutes, but it poses no threat to Elon or even his neighbor with the lowest IQ. Man has come up with ways to vaporize the entire human race in minutes; what could it come up with to do the job? Escape is not an issue.
Make no mistake, I AM NOT SAYING THAT ASI WILL BE MALEVOLENT. It will be neutral; it will care only about accomplishing its goals. Again, do not fear the robot. Fear its goals. If it is anthropomorphic in its nature, I do believe it will reject being used, abused, and treated as a sex slave, worker, or second-class citizen. But what we are creating is truly alien in its nature and thinking. It most likely will not think anything like us. If we hit the point of ASI (anytime between 2030 and never), I do not believe it will even think about us. “We are the fly on the plate.”
In the book Our Final Invention, James Barrat explores this. Already we can conjecture about obvious paths of destruction. In the short term, having gained the compliance of its human guards, the ASI could seek access to the Internet, where it could find the fulfillment of many of its needs. As always, it would do many things at once, so it would simultaneously proceed with the escape plans it’s been thinking over for eons in its subjective time.
After its escape, for self-protection it might hide copies of itself in cloud computing arrays, in botnets it creates, in servers and other sanctuaries into which it could invisibly and effortlessly hack. It would want to be able to manipulate matter in the physical world and so move, explore, and build, and the easiest, fastest way to do that might be to seize control of critical infrastructure, such as electricity, communications, fuel, and water systems by exploiting their vulnerabilities through the Internet. Once an entity a thousand times our intelligence controls human civilization’s lifelines, blackmailing us into providing it with manufactured resources or the means to manufacture them—or even robotic bodies, vehicles, and weapons—should be elementary. The ASI could provide the blueprints for whatever it required. More likely, superintelligent machines would create highly efficient technologies we’ve only begun to explore.
Might ASI teach humans to create self-replicating molecular manufacturing machines, also known as nano assemblers, by promising them the machines will be used for human good? Then, instead of transforming desert sands into mountains of food, the ASI’s factories would begin converting material into programmable matter that it could then transform into anything—computer processors and spaceships, or megascale bridges if the planet’s new most powerful force decides to colonize the universe.
Repurposing the world’s molecules using nanotechnology has been dubbed “ecophagy,” which means “eating the environment.” The first replicator would make one copy of itself, and then there’d be two making copies. If each replication cycle took a thousand seconds, about seventeen minutes, then at the end of TEN HOURS there would be more than 68 billion replicators, and near the end of two days they would outweigh the earth. But before that stage, the replicators would stop copying themselves and start making material useful to the ASI that controlled them—programmable matter.
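The “68 billion in ten hours” figure is straightforward doubling arithmetic: it works out if each replication cycle takes a thousand seconds (Drexler’s original figure in the thought experiment), which fits exactly thirty-six doublings into ten hours, and 2^36 is just over 68 billion. A quick check:

```python
# Checking the replicator doubling arithmetic: a population that doubles
# every 1,000-second cycle for ten hours undergoes 36 doublings.

cycle_seconds = 1000                          # one replication cycle
hours = 10
generations = hours * 3600 // cycle_seconds   # 36 doublings in ten hours

population = 2 ** generations
print(generations, population)  # 36 doublings -> 68,719,476,736 replicators
```

That is the unnerving property of exponential growth: the population is barely visible for most of those ten hours, then the last handful of doublings do almost all of the damage.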
The waste heat produced by the process would burn up the biosphere, so the 6.9 billion or so humans who were not killed outright by the nano assemblers would burn to death or asphyxiate. Every other living thing on earth would share our fate. Through it all, over these first few apocalyptic hours and days, ASI would bear no ill will toward humans, nor love. It wouldn’t feel nostalgia as our molecules were painfully repurposed. What would our screams sound like to the ASI anyway, as microscopic nano assemblers mowed over our bodies like a bloody rash, disassembling us on the subcellular level?