The Future Of Humanity With Yuval Noah Harari

It might not feel like it in 2016, a year of terrorist atrocities and political uncertainty, but the human race is actually doing pretty well, says Yuval Noah Harari. The author and historian best known for ‘Sapiens: A Brief History of Humankind’ kicks off his worldwide book tour for ‘Homo Deus: A Brief History of Tomorrow’ with an interview with the BBC’s Kamal Ahmed, hosted by Intelligence Squared.

Harari maintains that in the grand tapestry of human history, we are actually well on our way to conquering the three greatest threats to our survival: famine, plague and war. For the first time, more people are dying from eating too much than too little, more people are dying from old age than from infectious diseases, and the death toll from war crimes and terrorism is on its way down. In fact, according to Harari, statistically speaking, “McDonald’s and Coca-Cola actually pose a greater threat to your life than Al Qaeda and Islamic State.”

The New World

But once we have solved these problems for good, what will we do with ourselves as a species?


“The next big projects of humankind will be to overcome old age and death, to find the secret to happiness, and to basically upgrade humans into gods,” says Harari. And he’s not speaking figuratively, but literally. “For thousands of years, humans have imagined gods in a particular way; they’ve ascribed abilities and qualities to gods… And we are now seriously in the business of acquiring these traditional divine abilities and qualities for ourselves, whether it’s trying to overcome death, or the ability to create and design life according to our wishes.”

If anything, he says, humanity is reaching beyond what we think of as divine; we’re not just recreating the organic life we see around us, but creating brand new inorganic entities. Harari sees the rise of artificial intelligence as analogous to the Industrial Revolution. Just as the automation of manual labour led to the creation of an industrial proletariat, so too will the automation of more complex tasks lead to a new category of “useless” people.

“Algorithms don’t have to be perfect,” he says, “they just have to be on average better than human beings.” And once a machine knows you, and knows how to do your job, well — you become surplus to requirements. Which isn’t to say the AI uprising is in sight; he doubts that will happen for at least a few lifetimes.

Conveniently enough, Harari describes everything he is saying as “a distinct possibility, not a prophecy.” He is the first to admit that he is a technophobe, but maintains that this doesn’t preclude him from making these assertions. “For me, the most interesting questions aren’t the technical ones, they are the political, economic and philosophical ones… You don’t need to know how an atomic bomb works to know what it will do to a city, or to speculate what that means for geopolitics or religion.”

As AI becomes an increasingly integral part of daily life, he predicts that jobs for philosophers will emerge, as ethical and moral questions become practical problems for engineers. And what of this new “useless” class? With the loss of economic importance comes the loss of political power, says Harari; for instance, the ability to unionise. Humans need meaning in order to be happy. Where, then, will we find this meaning, if not in rewarding work? “Drugs and video games,” apparently.

Selling The Human Race

Until now, the vast majority of human endeavour has been about mastering the world around us. The next phase, Harari imagines, will involve turning our gaze inward — learning how to fully harness our bodies and our brains.

This will require a convergence of the life sciences and computer sciences, as the true value of the data created by our bodies becomes apparent. He compares the current value exchange between consumers and tech companies over personal data to the purchase of Manhattan for a handful of colourful beads.

What worries Harari most is the potential privatisation of our future: “The decisions that will shape humankind and life itself are currently made by a very small number of people who don’t really represent anybody.” There needs to be some kind of shift, he says, so that people a) understand how serious this is, and b) are empowered to do something about it.

Ultimately, says Harari, “social survival in the 21st century will require a constant process of learning, changing, and reinventing.” Which is to say, human civilization won’t be wiped out by the AI revolution; it just might look a little different to how it does today.



