10/20/2017

Science Goes To The Movies: Blade Runner 2049

16:44 minutes

Blade Runner 2049, the blockbuster sequel to the 1982 sci-fi thriller, tells a story in which synthetic human “replicants” are the enslaved labor force that runs the world—or at least, what’s left of it. Meanwhile, one replicant, a cop named K, tries to understand both his origin and what it means to be human. But exactly how close are we to building … or growing … something that walks and talks and bleeds like us? And should we? Roboticist Angelica Lim of Simon Fraser University and bioengineer Terry Johnson of the University of California, Berkeley, weigh in.

Segment Guests

Terry Johnson

Terry Johnson is a professor of bioengineering at the University of California, Berkeley, and author of How to Defeat Your Own Clone: And Other Tips for Surviving the Biotech Revolution. He’s based in Berkeley, California.

Angelica Lim

Angelica Lim is an assistant professor of computing science at Simon Fraser University in Burnaby, British Columbia, Canada.

Segment Transcript

IRA FLATOW: This is “Science Friday.” I’m Ira Flatow.

MAN: The most spectacular science shocker ever filmed. Too real to be science fiction. Now science fact.

IRA FLATOW: Ah, yes. That sound signals another edition of “Science Goes to the Movies.” And this week, I thought it was time to talk about “Blade Runner 2049.” And if you were a fan of the original, you don’t need much introduction.

Human-like replicants are doing humanity’s dirty work on Earth and in off-world colonies. Meanwhile, a replicant cop named K is trying to learn where he himself came from. In a story with flying cars, fake memories, and huge industrial farm complexes, there’s a lot of science fiction to pick apart.

But the biggest question I have is, how do you make a replicant? What kind of technology, whether gene editing or AI, would we use? That’s what we’re going to be talking about next.

Angelica Lim is an assistant professor of computing science at Simon Fraser University in British Columbia, Canada. And Terry Johnson is a professor of bioengineering at the University of California at Berkeley.

[ALARM SOUNDING]

That button means yes, we’re going to be talking about spoilers. So if you haven’t seen the movie yet, cover your ears, turn off the radio. Listen to this podcast later. Or just accept your fate and go along with us.

Let’s talk about the film. Terry Johnson, all right. How did you like the film?

TERRY JOHNSON: I very much enjoyed it.

IRA FLATOW: Because?

TERRY JOHNSON: I thought that it did a really good job of suggesting different options for the science behind it and making you figure out how you felt about it.

IRA FLATOW: Oh, that’s cool. Angelica, what about you? Did you like it?

ANGELICA LIM: I have to say that I went in hoping that I would really like it, and I was a bit disappointed.

IRA FLATOW: Because?

ANGELICA LIM: There were a lot of ideas in it that had just been explored in other science fiction films, like “Her,” which came out a couple of years ago, and “Westworld.” So nothing really came across as new to me.

And there was this kind of– well, this is based off of “Do Androids Dream of Electric Sheep?” from the 1960s. So the ideas were very dated– the ideas of having AIs as housemaids, in servitude. All of these ideas felt a little bit dated. So I didn’t really appreciate that too much.

IRA FLATOW: Let me ask the bioengineer here, Terry. Like in the original “Blade Runner,” the replicants, those human-like artificial beings, are the stars of the show. But we’re never quite told how they were made. What makes the most sense? Are the replicants biological, or are they more like robots, as in “Westworld”? We’re never told.

TERRY JOHNSON: It seems like they want to toe the line. There’s a scene in the original that suggests that the various biological parts– the eyes are made by one person. It’s sort of like an assembly line that’s all put together at another point, which would definitely be the hard way to construct any sort of biological, human-like thing like a replicant.

But I think that the science there is more about challenging you. Does the fact that these biological pieces are constructed separately and put together as opposed to born– does that change the way you feel about the person in front of you?

IRA FLATOW: Angelica, what are your feelings about it?

ANGELICA LIM: Well, the question there, then, from my perspective as a roboticist is, could we build things that looked like replicants? You’d probably start out with skeletal frames made of metal and then build a more organic type on top. Today, there are silicone, very human-like-looking robots. In Japan, for example, there are the Geminoids, which from the outside look almost indistinguishable from humans, as long as you don’t interact with them for too long.

But then the question is, why is it in “Blade Runner” that when one of them is cut, they bleed? Why would we go to that extent? And that’s questionable for me.

IRA FLATOW: And could it not, in this day and age of synthetic human genomes and building things with DNA– could they not be more of a hybrid of both biology and mechanics?

ANGELICA LIM: Well, I do know that at least coming from the human side, there are people that start to put things onto their bodies. These are called human cyborgs, whether it’s trying to put a sensor on their tongue or external to their body and using that as another way to sense the world– for example, using a kind of camera to feel colors or that sort of thing. So that’s starting to exist today.

TERRY JOHNSON: And there are hints in the movie. Wallace sees via these remote drone cameras that seem to have a connection with his nervous system. So it’s unclear whether the replicants are completely biological or to what extent they’re robotic or inorganic.

IRA FLATOW: Angelica, you talked about how robotics is advancing. And we know about soft robotics now, and things that feel more like skin. Could we be moving in that direction, to make them more real-looking and feeling?

ANGELICA LIM: Certainly from the hardware perspective. I lived in Japan for about six years, and there was one robot in particular called the HRP-4C. And this is a robot that could walk. It could move around– an android with very realistic silicone skin on her face and on her hands. So that exists there.

I think the real difficulty these days is the AI, the software, the control, the understanding of the world. And that’s where I think the challenge is coming up.

IRA FLATOW: The replicants in the movies are supposed to be programmed to obey, Terry. If we see the replicants as genetically engineered, how would that work?

TERRY JOHNSON: I think it’s pretty clear based on what we know that that wouldn’t be a matter of any sort of genetic engineering. Obedience is even more complicated a concept than intelligence. The idea that even a reasonably small number of genes could be manipulated in some way to cause behavior as complex as obedience to a corporation, I think, is beyond what people are considering in this.

It seems likely that may be more corporate PR than science. The evidence that we see of obedience is really based on a psychological test, the baseline test, which took over for the Voight-Kampff test in the original. And the Voight-Kampff test is about, is this a replicant or not?

The baseline test is about, is this replicant still going to obey? Is this replicant not yet so disturbed by what it’s forced to do and what’s been done to it that it can continue to function? And if it fails the baseline test, it’s quickly retired.

IRA FLATOW: And what about the ways in which their lifespans vary?

TERRY JOHNSON: The idea of programming the lifespan of an organism is probably a bit more of a scientific option. I don’t think that we know how to do it. But the hints from the first movie about looking to people with advanced aging diseases suggest that something like that might be possible, but at the very least, very far off.

IRA FLATOW: Angelica, your work focuses on how to bring emotion into AI. How do we do that? And how do you think it was accomplished in the film?

ANGELICA LIM: Well, there was an AI named Joi, and she was especially expressive. And I think the tagline for her was that she knows what you need and tells you what you want to hear. This is a kind of super empathy, isn’t it– understanding what the human is feeling and thinking and responding in kind?

And so that, today, is very difficult. And it’s hard to do. We do have emotion detectors which are able to look at your face and tell if you’re smiling or frowning and that sort of thing. But to go even further and understand– so Ryan Gosling– love the actor.

But if you look at his– he’s not super emotionally expressive. So how would an AI be able to understand that this is what he’s thinking and feeling, and if this happens, then that would also mean that? So there’s a lot of entertainment behind that part of the movie.

IRA FLATOW: We don’t know if he’s a replicant or not by the end of the movie. No spoilers.

Well, what did you make of Joi? You brought up Joi as a super-advanced AI assistant for K.

ANGELICA LIM: Yeah. Well, take a look at what we have today in the home. We have Alexa, the Amazon Echo, which is getting more and more popular. These are chat bots. We have chat bots online, and even ones that are able to fool people into thinking that they’re human, at least if you interact with them for only a few minutes. If you interact with them for 20 minutes, you realize that it’s just a chat bot.

But this kind of entity that looks so much like a human and can interact so much like a human is pretty impressive. And I think that’s half of the story here. The expression that we see in her– that can be done today. We’ve seen this in movies. This is no problem. Artists know how to do that.

But again, the fact that she has all of this understanding of the world, that she has these memories that are backed up in the cloud and that build up her whole persona and all of her personality, and that she can even say, OK, if the police come and they see all of my memories, they’re going to get you, so we better erase them all and put myself on this– this is a huge spoiler, by the way– put all of my memories on this stick.

That’s amazing. What goals have you programmed into this AI for it to do that? But that’s pretty interesting.

IRA FLATOW: Putting memories on a stick has been very popular in a lot of sci-fi now.

ANGELICA LIM: It has been, because it gives vulnerability. We think of these AIs as superhuman, and they exist in the cloud until they’re deleted. And so as soon as she gets onto the stick, suddenly we feel so much more emotional attachment to her. Anything that happens to the stick could mean losing her forever.

And so that’s an interesting idea. On the ethics side, though, would we want to have entities that we get so attached to that we might do something drastic for them? For example, in the last scene, when she’s on the stick and she’s about to get crushed, let’s say that in order to save his love, Ryan Gosling’s character goes and puts himself in the line of danger.

Is that OK? Is that OK to have objects that we are so emotionally attached to that we would risk our lives for them? And so I’d hope that we would keep all of our stuff backed up in the cloud. But then, of course, we’re thinking about privacy. Maybe we don’t want that to happen. So it’s a big ethical question.

IRA FLATOW: I’m Ira Flatow. This is “Science Friday” from PRI, Public Radio International, talking with Angelica Lim and Terry Johnson about the new “Blade Runner” movie and the idea of the replicants in it.

And Terry, we mentioned “Westworld” a little bit. I want to bring that analogy up again a little bit. Because in “Westworld,” there’s also this underlying theme of what is real, what is human. And the “Westworld” robots define their humanity by their ability to think for themselves. You’re seeing them actually thinking thoughts they were not programmed with.

On the other hand, in “Blade Runner,” the humanity here seems to me to be– the definition of it is whether you can reproduce like a human.

TERRY JOHNSON: Yeah. The idea of where do you draw the line is probably one that this society has had to grapple with for a long time. And I can imagine in the intervening decades when these new replicants come out, they’re being sold as much safer than the previous replicants. But I can imagine that this is a conversation that the company has been having with society for a long time, that maybe this is where they would like to draw the line. Because it’s convenient for their product.

It’s probably not, considering there are plenty of people now without reproductive capacity, a great line in the sand to draw about who deserves rights or not. But in the context of the story, it might make sense.

IRA FLATOW: It’s kind of a miracle, though, to make a robot reproduce, when you think about it.

TERRY JOHNSON: Think about the idea of having these replicants that were most likely designed to be unable to self-replicate, unable to reproduce, and having that design decision carried through from the Nexus 1 to the Nexus 2 to all of these different versions, and then trying to go back into the design and recapture something that was quickly turned away from decades previous. You can imagine the engineering difficulty of doing something that the original design– because this was based on people– was capable of doing.

IRA FLATOW: Angelica, it’s interesting in that K starts out knowing he’s a replicant. If we’re going down the path of a replicant being a robot, is it a good idea that he knows he’s a replicant?

ANGELICA LIM: Absolutely, yeah. I think it’s really important. Robots should know that they’re robots, and humans should also know that an entity is a robot or not.

A few years ago, there was a set of rules published called the UK Principles of Robotics. And one of the rules involves transparency. Any robot should be able to be clearly identified as a robot– again, going back to the ethical issue that we should be able to know that this is a robot, not a human. It doesn’t have the same capabilities, or even the same properties, as a human.

For instance, a robot could be working 24/7. A robot could be accessible any time. We shouldn’t have to think that, oh, this robot– this human, at least to us, is vulnerable. We should know that it’s a robot. It can upload its memory into the cloud, and it’ll be fine. So definitely, it should know it’s a robot, and so should we.

IRA FLATOW: You can make your own judgment on this. Go see “Blade Runner 2049.” And then if you play our podcast, you can relive our conversation all over again.

I want to thank both of you– Angelica Lim, assistant professor of computing science at Simon Fraser University in British Columbia, and Terry Johnson, professor of bioengineering at the University of California at Berkeley. Thank you both for taking time to be with us today.

TERRY JOHNSON: My pleasure.

IRA FLATOW: Have a great weekend.

Copyright © 2017 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producer

About Christie Taylor

Christie Taylor was a producer for Science Friday. Her days involved diligent research, too many phone calls for an introvert, and asking scientists if they have any audio of that narwhal heartbeat.
