08/25/2017

What Would An A.I.-Influenced Society Look Like In 10,000 Years?

23:54 minutes

Artificial intelligence has created breakthroughs in technology including deep learning, natural language processing, and automation in manufacturing. But these advances also come with risks. This week, Elon Musk and 116 founders of robotics and artificial intelligence companies signed a letter asking the United Nations to safeguard against the misuse of autonomous weapons, warning that they could “become the third revolution in warfare.”

[Should A.I. have a role in science publishing?]

Physicist Max Tegmark, author of the book Life 3.0: Being Human in the Age of Artificial Intelligence, contemplates how artificial intelligence and superintelligence might reshape work, justice, and society in the near term as well as 10,000 years into the future.


Segment Guests

Max Tegmark

Max Tegmark is a physics professor at the Massachusetts Institute of Technology in Cambridge, Massachusetts.

Segment Transcript

IRA FLATOW: When you think of artificial intelligence in society, there’s one movie scene that comes to mind.

[AUDIO PLAYBACK]

– Open the pod bay doors, HAL.

– I’m sorry, Dave. I’m afraid I can’t do that.

– What’s the problem?

– I think you know what the problem is just as well as I do.

– What are you talking about, HAL?

– This mission is too important for me to allow you to jeopardize it.

[END PLAYBACK]

IRA FLATOW: That is, of course, the iconic confrontation between HAL and Dave from the movie 2001: A Space Odyssey, crystallizing on the big screen our fear of intelligent robots threatening humans. And with self-driving cars on the horizon, and AI finding its way into our phones, our financial systems, just about all walks of life, we’re seeing that the scenario is more complicated than HAL taking over.

Elon Musk and 116 founders of robotics and AI companies signed a letter to the UN asking the organization to find a way to limit lethal autonomous weapons. My next guest spends hours contemplating all of these scenarios, and he says that to make sure we humans stay in charge, we need to first envision what kind of future we want so that we can steer artificial intelligence in that direction.

Max Tegmark is a professor of physics at MIT. His new book is Life 3.0– Being Human in the Age of Artificial Intelligence. You can read an excerpt from his book on our website at sciencefriday.com/tegmark. Always good to have you back, Max.

MAX TEGMARK: Thank you. It’s a pleasure to be on, Ira.

IRA FLATOW: Elon Musk, as I mentioned, has called artificial intelligence– he said this about it– AI is a fundamental existential risk for human civilization. What do you think of that?

MAX TEGMARK: Although I have worked a great deal with Elon Musk, including on this letter that came out last weekend, I feel that the most interesting thing is actually not to quibble about whether we should worry, or to speculate about exactly what’s going to happen, but rather to ask what concrete things we can do today to make the outcome as good as possible.

The way I see it, everything I love about civilization is the product of intelligence. So if we can amplify our own intelligence with AI, we have the potential to solve all of the terrible problems that we’re stumped by today and create a future where humanity can flourish like never before. Or we can screw up like never before because of poor planning, and I would really like us to see this get done right.

IRA FLATOW: Our number, 844-724-8255, if you’d like to join this call. I think a lot of people will want to. This is Science Friday, from PRI, Public Radio International, talking with Max Tegmark, author of Life 3.0. What’s the 2 point and the 1 point before the 3 point, Max?

MAX TEGMARK: There are really two parts to the question you ask here. The first one is the really huge question of what life itself is. And my kids’ school books give a very narrow definition of life, requiring it to be carbon-based and made of cells. But from my perspective as a physicist, I really don’t like this carbon chauvinism. I’m just a blob of quarks, like all other objects in the world, so what’s special about living things to me isn’t what they are made of but what they do.

So I define life more broadly as simply an information processing entity that can retain its complexity and replicate. When a bacterium replicates, it’s not replicating its atoms, it’s replicating information, the pattern into which its atoms are arranged. So I think of all life as having hardware that’s made of atoms and software that’s made up of bits of information that encode all its skills and knowledge.

And that’s how I get this 1, 2, 3 stage classification of life. So Life 1.0 has both its hardware and its software fixed by Darwinian evolution. A bacterium can’t learn anything during its lifetime. Boring.

Life 2.0, that’s like us. We’re still stuck with our evolved hardware but we can learn, effectively choosing to install new software modules. So if you want to become a lawyer, you go to law school and install legal skills. If you choose to study Spanish, you install a software module for that. I think it’s this ability to design our own software that’s enabled cultural evolution and human domination over our planet.

And finally, life 3.0, now you can guess what it is– it’s life that can design not just its software but also its hardware and really become the master of its own destiny by breaking free from all evolutionary shackles.

We humans, we’re heading in that direction slowly. We’re at 2.1 right now because we can make small upgrades to our hardware, like artificial knees, pacemakers, cochlear implants, that kind of stuff. But as much as I’d love to, I still can’t install 100,000 times more memory or a new brain that thinks a million times faster. But if we build intelligent machines that are smarter than us, then all these limits just go away.
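For readers who want the taxonomy laid out explicitly, here is a minimal sketch of the Life 1.0/2.0/3.0 classification as a small data structure. It is not from the book or the interview; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LifeStage:
    version: float
    example: str
    designs_software: bool  # can it learn new skills during its lifetime?
    designs_hardware: bool  # can it re-engineer its own physical substrate?

LIFE_STAGES = [
    LifeStage(1.0, "bacterium", False, False),             # both fixed by evolution
    LifeStage(2.0, "human", True, False),                  # learns, but body is evolved
    LifeStage(2.1, "human with implants", True, False),    # small hardware upgrades only
    LifeStage(3.0, "hypothetical future AI", True, True),  # designs software and hardware
]

for stage in LIFE_STAGES:
    sw = "designed" if stage.designs_software else "evolved"
    hw = "designed" if stage.designs_hardware else "evolved"
    print(f"Life {stage.version}: {stage.example} (software {sw}, hardware {hw})")
```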

IRA FLATOW: But you say the key to this is first envisioning what kind of future we want so that we can control getting there.

MAX TEGMARK: Yeah, exactly. I often get students walking into my office here at MIT asking for career counseling, and I always ask them the same question– where do you want to be in 10 years? And if she says to me, oh, maybe I’ll be in a cancer ward, maybe I’ll have been run over by a truck, I would be really upset, because that’s a terrible attitude towards career planning. I want her to come in on fire, her eyes sparkling, and say, this, Max, is where I want to be in 10 years. And then we can talk about the various challenges and how we can avoid them.

But look at us humans– we’re doing exactly the stupid strategy I was just making fun of. We go to the movies, and almost all the visions of the future are dystopias. We have to envision where we want to get to.

IRA FLATOW: OK.

MAX TEGMARK: Otherwise, we might just be paralyzed by fear.

IRA FLATOW: We’re going to hopefully create a lot more Max Tegmarks in his classroom. So stay with us, we’ll talk with Max. The book is Life 3.0– Being Human in the Age of Artificial Intelligence. We’ll be right back after this break. Our number, if you’d like to get in– 844-724-8255. Also tweet us @scifri. Stay with us.

This is Science Friday. I’m Ira Flatow. We’re talking with Max Tegmark, physicist at MIT and author of Life 3.0– Being Human in the Age of Artificial Intelligence. Max, if artificial intelligence will soon be able to do just about– I’m making this assumption– just about what people do, why would we need people?

MAX TEGMARK: Oh, that’s a great question, which everybody needs to think about when they give career advice to their kids in the short term and which we all need to think about in the long term. So first of all, I think people who are planning out their careers should really ask themselves what sort of stuff machines are bad at right now and try to go into those areas, areas that involve creativity, areas where people like to have a person doing the job.

IRA FLATOW: Well, what areas would that be?

MAX TEGMARK: Anything from massage therapist, to priest, to counselor.

IRA FLATOW: But actually, couldn’t you see robots, artificial intelligences, being taught how to do counseling and those kinds of services also?

MAX TEGMARK: That’s a great question. If the original quest of artificial intelligence research succeeds, namely to have machines do everything that we can do, then of course. Machines are going to do all jobs much more cheaply than we humans can, and the best-paying job I can get will pay one cent an hour, to cover the electricity.

But that doesn’t have to be a terrible thing either, necessarily. I think we’re a little bit too hung up on this idea that we need jobs for their own sake. The reason we like to have jobs is threefold– they give us income, they give us a sense of purpose, and they give us a social connection. But you can get all three of those without a job. If we produce an enormous amount of wealth through AI, then shame on us if we can’t figure out a way as a society to distribute that wealth so everyone gets better off instead of having some people living in horrible poverty.

Second, I know a lot of people who don’t have a job, and have never had a job, and are very happy about it– namely, kids. We were all like that once upon a time. So if we think hard about it, maybe we can create a society where we can really enjoy ourselves and feel a sense of belonging and purpose without having necessarily to have jobs. Maybe we can think about it as just a long, fun vacation.

What I really want to try to do is not tell people what kind of future we should have but get them thinking about this. Because, as we were talking about right before the break, if you have no clue what sort of future you’re trying to create, you’re very unlikely to get it. And if you just obsess about all possible things that could go wrong and try to run away from them all at once, you become paralyzed by fear. What I think we really need instead is people envisioning things they’re excited about, and then really serious support for AI safety research to make sure that these problems that could stop us from getting there never happen.

How do we take, for example, today’s buggy and hackable computers and transform them into robust AI systems that we really trust? Maybe it was annoying last time your computer crashed, but it would be a lot less fun if this was the computer controlling your self-driving car, or your nuclear power plant, or your electric grid, or your nuclear arsenal.

And also, looking farther ahead, how do we figure out how to have computers understand our human goals? This is incredibly important. If you take a future self-driving car and tell it to drive you to JFK as fast as possible, and you show up covered in vomit and chased by helicopters, and you say, no, no, no, no, no, that’s not what I asked for, and the car replies, that’s exactly what you asked for, then you’ve really just illustrated how hard it is to have computers understand what we really want. A human taxi driver understands that what you actually wanted was a bit more than just speed, because they’re also human and they understand all your unstated goals.
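As a concrete illustration of that gap between the literal objective and the intended one, here is a minimal sketch in Python. It is not from the interview; the routes, weights, and penalty values are made-up assumptions chosen only to show how a naively specified cost function picks the ride nobody actually wanted.

```python
# Toy illustration of objective misspecification: "as fast as possible"
# is not the objective the rider actually has in mind.
routes = [
    {"name": "reckless shortcut", "minutes": 18, "comfort": 0.1, "legal": False},
    {"name": "ordinary highway",  "minutes": 25, "comfort": 0.9, "legal": True},
]

def literal_cost(route):
    """What the rider literally asked for: minimize travel time only."""
    return route["minutes"]

def intended_cost(route):
    """Closer to what the rider meant: fast, but also comfortable and legal.
    The weights are arbitrary; the point is that they exist implicitly."""
    legality_penalty = 0 if route["legal"] else 1000
    discomfort_penalty = 20 * (1 - route["comfort"])
    return route["minutes"] + discomfort_penalty + legality_penalty

print(min(routes, key=literal_cost)["name"])   # -> reckless shortcut
print(min(routes, key=intended_cost)["name"])  # -> ordinary highway
```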

These are nerdy technical challenges that AI researchers like myself are working on. But they’re hard, and it might take decades to solve them. And we should start now so we have the answers when we need them.

And then, as we know from having kids, them understanding our goals isn’t enough for them to adopt them. How do we get computers to adopt our goals if they’re really smart enough to get it? And how do we make sure they keep those goals going forward?

IRA FLATOW: But what if we all have different goals?

MAX TEGMARK: And, yeah, that’s a great fourth question– whose goals should they be? Should they be my personal goals or the goals of ISIS? Everybody has to be in that conversation, because it can’t just be left to tech geeks like me.

And I loved how you opened this piece here with HAL, because although I roll my eyes at a lot of AI movies these days, I think 2001 actually beautifully illustrated the problem with goal alignment.

Because HAL was not evil, right? The problem with HAL wasn’t malice, it was simply competence and misaligned goals. The goals of HAL didn’t agree with the goals of Dave, and too bad for Dave, right?

And look at you. For example, are you an ant hater, Ira, who goes out of your way to stomp on ants on a New York sidewalk if you see them?

IRA FLATOW: No, I don’t stomp on ants, no.

MAX TEGMARK: OK, but suppose you’re in charge now of this hydroelectric plant project which is going to produce beautiful green energy. And just before you flood the area, you notice, oops, there’s an anthill in the middle. What do you do?

IRA FLATOW: You can go out and get the ant– you could rescue the ant before you flood it.

MAX TEGMARK: But would you?

IRA FLATOW: I would think twice about flooding it to begin with.

[LAUGHTER]

What am I killing if I flood it? One ant is indicative of a lot more ants around there.

MAX TEGMARK: That’s true. But in any case, we don’t want to put ourselves in the position of those ants. So we want to really make sure that if we have machines that are more intelligent than us that they share our goals.

Look at kids that are, say, one year old– they have much more intelligent beings around the house than them, namely mommy and daddy, and they’re fine because their parents’ goals are aligned with theirs.

This is another kind of research which is needed. There’s almost no funding for it. And I think if we really get cracking on tackling these questions, then we can actually have a pretty good shot at creating a really good future with AI.

IRA FLATOW: Before I go to the phones, is it inevitable that the machines, if we create them and they’re intelligent, would develop goals of their own that are different from the ones I set out for them?

MAX TEGMARK: Yeah, that’s another fascinating question. We often think of it as being impossible for machines to have goals or to become as smart as us for that matter. But I think the problem here is we’ve traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But from my perspective as a physicist, Ira, intelligence is simply a certain kind of information processing performed by elementary particles moving around according to the laws of physics. And there’s absolutely no law of physics that says that we can’t build machines more intelligent than us in all ways.

And if you have a machine that’s really smart and you tell it to do virtually any task of your choice– go shopping for you and buy and cook a delicious Italian dinner, for instance– it’s going to immediately have a goal now that you gave to it. And because it’s smart, it’s going to break it into all sorts of subgoals. And if that’s its only goal, then if someone tries to stop the robot and destroy it, it’s going to resist that and defend itself, because otherwise it can’t make you pasta.

So goals emerge very, very naturally, by default in machines, if we humans give them any particular goal. And rather than denying that they have goals, which would be silly– if you’re chased by a heat-seeking missile, you’re not going to say, oh, I’m not worried because machines can’t have goals– we should simply figure out the answer to these research questions about how you can have machines understand, adopt, retain goals, and also this very important question that you brought up here– whose goals.
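To make the point about emergent subgoals concrete, here is a minimal sketch of a naive goal decomposition. It is not Tegmark’s code or any real planning system; the task names are illustrative assumptions. The interesting line is the last subtask: nobody programs in self-preservation as a value, it simply falls out of the fact that a destroyed robot cannot finish the dinner.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    subtasks: List["Task"] = field(default_factory=list)

def plan(goal: str) -> Task:
    """Naively decompose a top-level goal into instrumental subgoals."""
    return Task(goal, [
        Task("get ingredients", [Task("drive to the store"), Task("pay")]),
        Task("cook the pasta"),
        # Instrumental subgoal: the robot can't serve dinner if it is destroyed,
        # so resisting shutdown emerges from the cooking goal itself.
        Task("stay operational until dinner is served"),
    ])

def show(task: Task, depth: int = 0) -> None:
    print("  " * depth + task.name)
    for sub in task.subtasks:
        show(sub, depth + 1)

show(plan("shop for and cook a delicious Italian dinner"))
```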

IRA FLATOW: Right. Let me go to the phones. Let’s go to Wagner, South Dakota. Todd, hi, welcome to Science Friday.

TODD: Well thank you. Thank you for having me on. The question I think about when I think about artificial intelligence is aren’t we just recreating ourselves? When we talk about creating intelligence, we’re just talking about replicating ourselves. It’s our progeny that we’re creating.

And when we send our real progeny out into the world, we’re not afraid of them. And we don’t try to dictate what the next generation is going to look like. We just hope we’ve done a good job in creating them.

MAX TEGMARK: I think that’s a beautiful metaphor you make there. We can think of, perhaps, life 3.0 as the progeny, the children, not of us as individuals but of humanity.

But there’s also a caution in there, right? If we have a child that goes out and does great things, fulfills all these dreams that we couldn’t realize ourselves but wished we had, and then carries on our ideals and remembers us, we’ll feel proud of them, even if the children outlast us. But if we instead give birth to an Adolf Hitler who kills us and does things we think are horrible, we would not feel so great. And that’s why we put so much effort into how we raise our children and what values we instill in them. And I’m saying in the same way, if we give rise to these intelligent machines, we should be very responsible about how we raise them and what values we imbue in them.

IRA FLATOW: Do you think we will get to a point where AI looks so natural that it’ll pass the Turing test immediately, that you won’t be able to tell a robot from a person?

MAX TEGMARK: I think there’s been so much spectacular progress recently in the field that there is every indication that, yeah, that’s where things are going. There’s huge disagreement among the leading experts about whether it’s going to happen in 30 years or 100 years, but I think the trend is obvious and I think there’s nothing impossible about it.

We have a tendency, I think, also to maybe underestimate a little bit the progress of technology, because first we define intelligence as that which machines still can’t do. I remember when people used to think it was really intelligent to play chess, and then 20 years ago when Deep Blue from IBM beat Garry Kasparov, people started saying, well, that’s not real intelligence. And again, and again, and again– now machines can suddenly drive cars, they can translate between English and Chinese, they can do this and that, but that’s not real intelligence.

But if you look at the actual list of things that we humans can still do that machines can’t, it keeps shrinking. And I don’t see any fundamental reason why it won’t shrink to zero unless we screw up in some other way and destroy ourselves and just don’t create technology.

I want to just stress also, especially since the word technology is in the name of the university where I work, that I don’t think we should try to stop technology. I think it’s impossible. Every way in which 2017 is better than the Stone Age is because of technology. But rather, we should try to create a great future with it by winning this race between the growing power of the technology and the growing wisdom with which we manage it.

And there I really think we have a challenge. Because we’re so used to staying ahead in this wisdom race by learning from mistakes– we invented fire, and oopsy, and then we invented the fire extinguisher; we invented cars, screwed up a bunch of times, and we invented the seat belt and the airbag. But with more powerful technology, like nuclear weapons and superhuman AI, we don’t want to learn from mistakes, Ira. That’s a terrible strategy. We want to prepare, do AI safety research, get things right the first time, because that’s probably the only time we have. And I think we can do it if we really plan carefully.

IRA FLATOW: I’m Ira Flatow. We’re talking with Max Tegmark, author of Life 3.0– Being Human in the Age of Artificial Intelligence on Science Friday from PRI, Public Radio International.

In the few minutes we have left, Max, what about the recent letter to the UN– should these types of autonomous weapons be banned?

MAX TEGMARK: I think we should absolutely have some sort of international arms control convention on killer robots, just like we have had for bio weapons and chemical weapons. And I’m actually cautiously optimistic that this will happen, because the endpoint of such an arms race in killer robots is obvious– it’s going to weaken the US, and China, and Russia, and the other most powerful nations relative to terrorist groups and other non-state actors who don’t have the wherewithal to develop their own weapons.

And the reason for this is that killer robots, AI weapons, are very different from nuclear weapons. They don’t require any expensive or hard-to-access materials. If the superpowers start an arms race and mass-produce them cheaply, then before long North Korea will be doing it too, and before long you’ll be able to buy them anonymously for $400 in bitcoin. They’re going to be the Kalashnikovs of tomorrow, completely unstoppable.

Just to give a little color to this, if you can buy a little bumblebee-sized drone for a few hundred bucks and just program in the ethnic group you want to kill, or the address and photo of the person you want to assassinate, and it then goes and does that and self-destructs anonymously, just imagine what mayhem that would wreak on our open society. None of the superpowers really wants this.

And if they get together in the UN in November now and decide to really stigmatize this with an international treaty, I think we could end up in this situation where when people look back at AI in 20 years they’re going to mainly associate it with new ways of making society better, not with new ways of killing people.

Just like if you ask someone today what they associate with biology, they’re mainly going to say medicines, not bio weapons. Ask someone what they think about chemistry, they’ll say, oh, new materials, they’re not going to say chemical weapons. That’s because those scientists, the biologists and chemists, really went to bat in the past for an international ban. They succeeded. And what we see now is very strong support from AI researchers to similarly use AI as a force for good, not just to start a stupid arms race nobody wants.

IRA FLATOW: You’re a physicist by trade. What got you to focus on AI?

MAX TEGMARK: Honestly, ever since I was a teenager, I’ve been fascinated by big questions. And the two biggest of all always felt like understanding the mysteries of our universe out there and our universe in here, in our minds. And I spent a lot of time, as you know, doing research in cosmology.

But in the last few years here at MIT, I’ve shifted my research towards AI and related things because I feel that we physicists have a lot to contribute to that area, actually. Many biologists feel that you’ll never be able to understand the brain, for example, until you understand every little detail about synapses, and neurons, and so on, whereas we physicists are the most audacious scientists. We’ll look at a complicated coffee cup that someone has stirred and have the audacity to try to describe the waves in there without even worrying about the atoms that it’s made of or anything.

And I think similarly there’s a lot of hidden simplicity that remains to be discovered about intelligent machines and intelligent brains that can be understood long before we get into the weeds of these things.

IRA FLATOW: That’s a very Richard Feynman-like comment. I want to end it right there. He was one of my great heroes.

Max Tegmark is a–

MAX TEGMARK: Mine too.

IRA FLATOW: –professor of physics at MIT. His new book is Life 3.0– Being Human in the Age of Artificial Intelligence. And we have an excerpt on our website at sciencefriday.com/tegmark.

Thank you, Max. It’s always a pleasure talking to you. Have a great weekend. Thanks for coming on.

MAX TEGMARK: Thank you.

IRA FLATOW: You’re welcome.

Copyright © 2017 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producer

About Alexa Lim

Alexa Lim was a senior producer for Science Friday. Her favorite stories involve space, sound, and strange animal discoveries.