Yuval Harari on why humans won’t dominate Earth in 300 years

“It's not because I overestimate the AI. It's because most people tend to overestimate human beings.”

Yuval Noah Harari’s first book, Sapiens, was an international sensation. The Israeli historian’s mind-bending tour through the triumph of Homo sapiens is a favorite of, among others, Bill Gates, Mark Zuckerberg, and Barack Obama. His new book, Homo Deus: A Brief History of Tomorrow, is about what comes next for humanity — and the threat our own intelligence and creative capacity poses to our future.

I spoke with Harari recently for my podcast, The Ezra Klein Show. To hear our whole conversation, subscribe on iTunes (or wherever you get your podcasts) or stream it off SoundCloud. In this excerpt, which has been edited for length and clarity, Harari and I discuss the rise of artificial intelligence, whether digital consciousness is a necessary byproduct of digital intelligence, and what it will all mean for human beings.

As you’ll see, I’m a bit less convinced than Harari is that the computers are coming for our jobs, and that human beings are on the edge of economic uselessness. But I could very well be wrong, and he makes a good case that I am.

We also talk about virtual reality, and the possibility that we will manage the problem of economic irrelevance by retreating into artificial wonderlands that give us the meaning and the narrative that our daily lives deny us. Harari argues we’ve been applying that salve for millennia now — we just called it religion.

Ezra Klein

Do you think that in 200 or 300 years, human beings will be the dominant actor on Earth?

Yuval Harari

Absolutely not. If you asked me about 50 years, it would be a difficult question, but 300 years is a very easy question. In 300 years, Homo sapiens will not be the dominant life form on Earth, if we exist at all.

Given the current pace of technological development, it is possible we destroy ourselves in some ecological or nuclear calamity. The more likely possibility is that we will use bioengineering and machine learning and artificial intelligence either to upgrade ourselves into a totally different kind of being or to create a totally different kind of being that will take over.

In any case, in 200 or 300 years, the beings that will dominate the Earth will be far more different from us than we are different from Neanderthals or from chimpanzees.

Ezra Klein

When I hear these kinds of arguments about AI’s eventual triumph, it often seems to me to be the most cerebral humans — your Elon Musks and Yuval Hararis and Bill Gateses — overestimating the importance of cerebral capabilities. But it's not clear that it was our analytical capabilities that allowed us to dominate. It was our cooperation and other factors.

Yuval Harari

I totally agree that for success, cooperation is usually more important than just raw intelligence. But the thing is that AI will be far more cooperative, at least potentially, than humans. To take a famous example, everybody is now talking about self-driving cars. The huge advantage of a self-driving car over a human driver is not just that, as an individual vehicle, the self-driving car is likely to be safer, cheaper, and more efficient than a human-driven car. The really big advantage is that self-driving cars can all be connected to one another to form a single network in a way you cannot do with human drivers.

It's the same with many other fields. If you think about medicine, today you have millions of human doctors and very often you have miscommunication between different doctors, but if you switch to AI doctors, you don't really have millions of different doctors. You have a single medical network that monitors the health of everybody in the world.

If right now, as we speak, an AI doctor in Timbuktu discovers a new disease or a new treatment, this information is immediately available to my personal AI doctor on my smartphone. Some of the biggest advantages of AI are in the field of cooperation, not in intelligence.

There is a lot of confusion about what artificial intelligence means or doesn't mean, especially in places like Silicon Valley. For me, the biggest confusion of all is between intelligence and consciousness. Ninety-five percent of science fiction movies are based on the error that an artificial intelligence will inevitably be an artificial consciousness. They assume that robots will have emotions, will feel things, that humans will fall in love with them, or that they will want to destroy us. This is not true.

Intelligence is not consciousness. Intelligence is the ability to solve problems. Consciousness is the ability to feel things. In humans and other animals, the two indeed go together. The way mammals solve problems is by feeling things. Our emotions and sensations are really an integral part of the way we solve problems in our lives. However, in the case of computers, we don't see the two going together.

Over the past few decades, there has been immense development in computer intelligence and exactly zero development in computer consciousness. There is absolutely no reason to think that computers are anywhere near developing consciousness. They might be moving along a very different trajectory than mammalian evolution. In the case of mammals, evolution has driven mammals toward greater intelligence by way of consciousness, but in the case of computers, they might be progressing along a parallel and very different route to intelligence that just doesn't involve consciousness at all.

We may find ourselves in a world with nonconscious superintelligence. The big question is not whether the humans will fall in love with the robots or whether the robots will try to kill the humans. The big question is what a world of nonconscious superintelligence looks like, because we have absolutely nothing in history that prepares us for such a scenario.

Ezra Klein

To me, that is the most interesting question about AI, and the one that I feel is almost always ignored. The reason we solve problems is that feelings drive us. The feeling of anger, the feeling of pain, these lead us to try to solve problems. And at a very base level, there’s the drive to reproduce, which is also mediated by feelings of love and lust.

So much of not just human civilization but the way all animals on Earth seem to operate is about securing reproduction for their species. The question I’m always stopped by when I try to imagine AI is: What does superintelligence without the basic biological drivers of reproduction look like? Even if you imagine it would have something like consciousness, it wouldn't have our consciousness.

So AI would have powerful intelligence to solve problems, but what would its motivation be? Why would it want to solve those problems? Which problems would it want to solve? I feel so much of the AI conversation assumes that the AI will have the human desire for more, that it will have something and then it will want more things. It will become the best Go player, but it won't be willing to stop there. It will also have to be better than anybody else at Monopoly. It will also have to be better than anyone else at playing Guitar Hero on the PlayStation. But it isn't clear to me that would be true or what would make it true.

Yuval Harari

In the first generations of AI, you can say that the motivation will be determined by the people who program the AI, but as machine learning kicks in, you really have no idea where it might take the AI. It will not have desires in the human sense because it will not have consciousness. It will not have minds, but it could develop its own patterns of behavior which are way beyond our ability to understand.

The whole attraction of machine learning and deep learning and AI for the people in the industry is that the AI can start recognizing patterns and making decisions in a way that no humans can emulate or predict. That means we have no ability to really foresee where the AI will develop. This is part of the danger. The scenarios in which AI goes beyond human intelligence are, by definition, the scenarios that we cannot imagine.

Ezra Klein

Then why, given the range of uncertainty both about AI development and what an AI would look like, are you so persuaded that human beings will not be a dominant life form in 300 years?

Yuval Harari

It's not because I overestimate the AI. It's because most people tend to overestimate human beings. In order to replace most humans, the AI won't have to do very spectacular things. Most of the things the political and economic system needs from human beings are actually quite simple.

We earlier talked about driving a taxi or diagnosing a disease. This is something that AI will soon be able to do better than humans even without consciousness, even without having emotions or feelings or superintelligence. Most humans today do very specific things that an AI will soon be able to do better than us.

If you go back in time to the hunter-gatherer days, then it's a different story. It would be extremely difficult to build a hunter-gatherer robot that can compete with a human being. But to create a self-driving car that is better than a human taxi driver? That's easy. To create an AI doctor that diagnoses cancer better than a human doctor? That's easy.

What we are talking about in the 21st century is the possibility that most humans will lose their economic and political value. They will become a kind of massive useless class — useless not from the viewpoint of their mother or of their children, useless from the viewpoint of the economic and military and political system. Once this happens, the system also loses the incentive to invest in human beings.

Ezra Klein

Let me challenge you on that. Let's say this change takes place over 50, 100, 150 years.

Yuval Harari

Fifty years is really very quick.

Ezra Klein

I understand, but it's not necessarily so quick for the economy. A way of saying this is that in 1900, a huge proportion of the American labor force was engaged in farming. By 2000, farming employed only a tiny percentage of the workforce.

To your point about economic uselessness, we have replaced very "useful" jobs like farming with a lot of jobs like mine that aren’t objectively as useful. Does the world need me doing podcasts and writing articles? Does it need you writing interesting books about possible futures? Probably not.

What we are good at doing in the economy is telling stories about what we need. We tell stories about the green pieces of paper that form our money, and we tell stories about the value of the things we buy with those green pieces of paper. We manage to convince ourselves that grape juice, if you let it sit around long enough, becomes this amazing thing called wine and can be worth a thousand dollars!

We may get to a point where computers are driving taxicabs and we are just telling each other what we really need in life is more yoga teachers and meditation teachers. You can continuously say, "Well, then the computer will do that," but I'm a little skeptical of the idea that we, as a species, won't be able to find things that we decide add value.

I wonder a little bit about the conflation of people being useful with people being valued. Useful is a normative judgment on some level. Value is a story — we decide what has value — and we're good at creating stories about what we want.

Yuval Harari

I think the answer is on two levels. First of all, with regards to the possibility that new jobs will appear just as farm workers moved to factories and then they moved to services and now they are yoga teachers, the problem here is that humans have basically two kinds of abilities we know about: physical abilities and cognitive abilities.

In the past, as machines competed with us in physical abilities in the fields and in the factories, more and more humans moved to jobs that require mainly cognitive abilities. Now the machines are starting to compete with us in the cognitive field as well, and we don't know of a third kind of ability, beyond the physical and the cognitive, that all of us could move into.

Ezra Klein

Can I offer one? Tom Friedman, who is good at putting things in nice little language capsules, likes to say we're moving from jobs of the mind to jobs of the heart. It seems to me the ability humans have is that human beings enjoy interacting with other human beings. I could have a computer teach me yoga, but I don’t.

You’re a good example. You are going to a silent meditation retreat for 60 days. Certainly we already have computers that could collate online meditation information, that could read every book ever written about meditation and spit out a printout for you, and then you could go off into a room on your own for 60 days and do it for no money — but you want those people, you want those interactions.

I actually think a lot of jobs in the economy are like this. I think even now, many jobs are actually useless. Books are in some ways an analogue to computers here — there is so much we could let books do for us that we don’t. I believe you teach at a university. You could just have everybody read the books, but people like having teachers. They like having TAs. They like being around other students. What human beings are skilled at, and have been for some time, is interacting with other human beings.

Yuval Harari

We are likely to see an immense advance in the computer's ability to read and understand human emotions better than humans can do it. If you go to the doctor, you want to have this warm feeling of a human being interacting with you. The way the doctor does it is by reading your facial expressions and your tone of voice and, of course, the contents of your words. These are the three ways in which a human doctor analyzes your emotional state and knows whether you're fearful or bored or angry or whatever.

Now, we are not yet there, but we are very close to the point when a computer will be able to recognize these biological patterns better than a human being. Emotions are not some mystical phenomena that only humans can read. In addition, the computer will be able to read signals coming from your body, which no human doctor can do. You can have biometric sensors on or inside your body and the computer will be able to diagnose your exact emotional state much better than any human being. Even in that, AI will have an advantage.

The other point is that what happened in the 20th century is that people who lost their jobs in agriculture got low-skilled jobs in factories, and when these jobs were gone, they got low-skilled jobs in services like being cashiers. The real problem in the 21st century is that the low-skilled jobs will disappear and we'll have a very big problem retraining people for high-skilled jobs. If you lose your job as a taxi driver when you're, say, 50 years old and you need to reinvent yourself at 50 as a yoga teacher, this is going to be very, very difficult.

Ezra Klein

The other side of the scenario you're laying out is a world that ends up in some kind of nonproductive, hyper-pleasure state. I honestly worry much more about the VR dystopia than the AI dystopia.

In a world where people lose their economic utility, it’s easy to imagine them retreating into virtual reality, which is already pretty damn good, and will be amazing 20 years from now. Do you imagine a possible future where we are trying to manage the problem of economic irrelevance through a massive societal distraction machine?

Yuval Harari

Yes, I think the other problem with AI taking over is not the economic problem, but really the problem of meaning — if you don't have a job anymore and, say, the government provides you with universal basic income or something, the big problem is how do you find meaning in life? What do you do all day?

Here, the best answers we've got so far are drugs and computer games. People will regulate their moods more and more with all kinds of biochemicals, and they will engage more and more with three-dimensional virtual realities.

This idea of humans finding meaning in virtual reality games is actually not a new idea. It's a very old idea. We have been finding meaning in virtual reality games for thousands of years. We've just called it religion until now.

You can think about religion simply as a virtual reality game. You invent rules that don't really exist, but you believe these rules, and for your entire life you try to follow the rules. If you're Christian, then if you do this, you get points. If you sin, you lose points. If, by the time you finish the game when you die, you have gained enough points, you get up to the next level. You go to heaven.

People have been playing this virtual reality game for thousands of years, and it made them relatively content and happy with their lives. In the 21st century, we'll just have the technology to create far more persuasive virtual reality games than the ones we've been playing for the past thousands of years. We'll have the technology to actually create heavens and hells, not in our minds but using bits and using direct brain-computer interfaces.
