05/18/2018

Consciousness At The Center Of ‘Westworld’s’ Maze

28:52 minutes

Credit: John P. Johnson/HBO

In HBO’s series Westworld, human-like robots populate a theme park where human guests can have violent, gory adventures in the Wild West without repercussions. The robots are so lifelike that they fool both the visitors and themselves. They bleed, die, grieve, and love—thinking themselves human.

“The human mind is not some golden benchmark glimmering on a green and distant hill,” park owner Robert Ford at one point tells his colleague Bernard, whose robotic nature remains unknown even to himself for much of the first season.


Ford is convinced that there’s nothing special about human consciousness that marks the difference between a robot’s mind and ours. But as Westworld’s robots grow increasingly independent of their repetitive, programmed loops, the show invites viewers to question whether AI can truly be autonomous or conscious—and who in this story deserves empathy.

Ira discusses the show’s science and social commentary with Texas A&M University roboticist Robin Murphy and Boston University neuroscientist Steve Ramirez.


Interview Highlights

On the roadblocks to building a more human-like robot.
Robin Murphy: I think when you’re talking about something like entertainment robots and things like Sophia, Geminoid, these robots that look like people, we’ve made a lot of progress in the physical realism and the voice generation and even the social cues and social conventions that people use… That’s really, really good. The underlying intelligence that allows you to go from a very restricted domain, if we’re talking about just some type of repair or making an appointment, to a broader conversation like we’re having now, that requires so much more intelligence. We have to have a common ground.

On the uniqueness of human consciousness.
Steve Ramirez: When we start talking about things like consciousness, we define it as our ongoing sense of self-awareness that’s realized in the brain. And I think that it’s special because it feels untouchable. I study memory and we say the same thing, that there’s a lot that feels ephemeral and feels as though it doesn’t have some kind of biological substrate. But now that we’re starting to portray consciousness on shows like Westworld or study consciousness in humans or perhaps the kinds of consciousness that other mammals, for instance, might have, I think it’s special and not special. It’s “special” because that sense of self-awareness is an amazing thing that arguably defines part of our humanity. It’s “not special” in the sense that it still yields to the scientific method and it’s still something that is realized in the 86 billion brain cells stuffed between my ears.


On Westworld’s pyramid theory of consciousness.
Steve Ramirez: I think it’s a useful metaphor, but it kind of makes consciousness sound like it exists in this kind of ladder-like scheme, whereas we know that there are different aspects of things that we’re conscious of—whether it’s when you’re staring at a sunset or listening to music or having a conversation with yourself internally, for instance. So there are language components, memory components, the component of being able to understand what other people are thinking. All of those, rather than forming a pyramid per se, are more like a soup of modules in the brain.

On Isaac Asimov’s “Three Laws of Robotics.”
Robin Murphy: I love Isaac Asimov, but the Three Laws of Robotics were set up explicitly to sound good but have all of these subtle ramifications that could power plots, because they’re ambiguous. And so I always, as you can tell, cringe when people say, “Oh, robots should follow the Three Laws of Robotics.” Well, that means that they will inherently be screwed up, because they can’t follow them. It was set up to have those disconnects.

On whether we can build realistic AI without something like memory.
Robin Murphy: No, we can’t… For us to talk, we have a common ground, which is memories—how we’ve built up our understanding of the world. We also have a lot of built-in emotion. One of the things that I disagreed with is how Westworld [portrayed]… this idea that emotions are so basic. They help regulate those ladders of intelligence, those modules in between. So we don’t see how you can actually get to that conscious level without memories, without this common ground, without scripts of how the world works.


Steve Ramirez: I think it ends up being like you really can’t have one without the other; they’re both intertwined. Memories thread and unify our overall sense of being, so we see that in dementia-like states, where you can be conscious in the moment but your overall sense of identity over time loses that kind of common denominator that is memory. And actually, that was one of the most fascinating things to me about how this is portrayed in Westworld, because when you look at characters like Dolores or Maeve [who are] having memories of previous iterations start creeping back in, you can’t help but ask the question, “Well, do we forget?”

Our iterations are basically like when we go to bed and wake up in the morning, or perhaps who we were when we were two years old and three years old, which most of us can’t remember, whereas what we did yesterday we remember. So I think it really gets to the fundamental question of, “Are you the same as your previous iteration?”, except in Westworld it’s displayed, of course, by fixing the host. But I thought it was interesting because it also brings to light the question of, if those memories are there, then are there ways to actually try to tinker with them and bring them back, even though they were once thought to be lost? And I think that’s actually a very real, scientifically tractable question.

On shifting sympathies from humans towards robots in popular culture.
Steve Ramirez: It ends up being a great vehicle for entertainment because it kind of flips the narrative on its head a little bit. We’re used to siding with the human because, of course, we’re humans, but [for instance]… when you look at the character of Bernard, you can see how he basically fooled everybody into thinking that he was human, and then he ends up being a robot. So I think it’s basically to force us to feel a little bit uncomfortable by being able to say, “Wow, I actually found myself relating to what I thought was human, which actually ended up being a robot.” …Whereas other shows might have been more clean-cut—we side with the humans, the robots take over, the humans win. I think in this case [in Westworld], there are more layers to the narrative, which just makes it a more enticing show.


On whether robots could eventually evolve a consciousness on their own.
Robin Murphy: We’ve certainly seen that in science fiction [and] speculated in the popular press, that we’re just going to have this emergent behavior, this self-awareness, come out of that. Again, it really violates what we know about how we program, with bounded rationality, where we also have these explicit boundaries, these limits on initiative. As for how it would jump out: we don’t know how to write that code. We don’t foresee writing that kind of code.

These interviews have been edited for space and clarity.


Segment Guests

Robin Murphy

Robin Murphy is a professor of Computer Science and Engineering at Texas A&M University in College Station, Texas.

Steve Ramirez

Steve Ramirez is an assistant professor of Psychological and Brain Sciences at Boston University in Boston, Massachusetts.

Segment Transcript

IRA FLATOW: In HBO’s futuristic series Westworld, eerily human-like robots play living, breathing, bleeding, grieving, and loving actors in a Wild-West-style theme park, where guests can live out their violent and heroic fantasies. As the Westworld robots pursue their assorted destinies this season, we want to talk about our standards of realism for robots and artificial intelligence compared to what they’re doing in Westworld. What should they be today, in 2018? Or the question Bernard, a programmer who has learned he’s actually a robot, asks in this clip–

[AUDIO PLAYBACK]

– I understand what I’m made of, how I’m coded. But I do not understand the things that I feel. Are they real, the things I experienced?

[PLAYBACK ENDS]

IRA FLATOW: Good question. In a world where technologists are trying to create AI robots– our world– should there be any differences between human beings and machines? Are there dangers in aiming to make robots as human-like as possible? That’s what we’re going to be talking about this hour. Let me introduce my guest, Robin Murphy, professor of computer science and engineering at Texas A&M University in College Station, Texas. She joins us by Skype. Welcome to Science Friday.

ROBIN MURPHY: Howdy.

IRA FLATOW: Howdy. Steve Ramirez, assistant professor of neuroscience at Boston University. Welcome to Science Friday.

STEVE RAMIREZ: Thank you very much for having me.

IRA FLATOW: Dr. Murphy, are the Westworld robots the benchmark that the roboticists are striving to get at, at this point, do you think?

ROBIN MURPHY: No, it’s interesting to see that everybody thinks that if you’re a roboticist working on artificial intelligence, you’re trying to create a human replicant, a peer-level intelligence that is a replacement or substitution for a human. Most of us are looking at trying to make robots or systems that complement human abilities, that assist them, or do things that people can’t do. This quest for general AI, not so much.

IRA FLATOW: And how close have we gotten to them? What is the general definition of where you want to go with this?

ROBIN MURPHY: Well, I think when you talk about something like entertainment robots, and things like Sophia, Geminoid, these robots that look like people, we’ve made a lot of progress in the physical realism, in the voice generation, and even like the social cues and social conventions that people use, the kind of segue, and to give clues as to where we are in the conversation, that you’re seeing Google Duplex starting to use, that’s really, really good.

The underlying intelligence that allows you to go from a very restricted domain– we’re talking about just some type of repair, making an appointment– to a broader conversation, like what we’re having now, that requires so much more intelligence. We have to have a common ground. We have models of the– I have a model of belief of your desires and intentions for this conversation that I’m using to try to tailor my answers to your questions.

And also, if we were physically together, you might be looking at something, and I would be connecting what you were saying to what your focus of attention was. So there’s a huge amount of that stuff that we don’t know how to do yet.

IRA FLATOW: Dr. Ramirez, what is so special about our minds? When we say that AI acts human, how do we define what that is, if we’re trying to aim for that?

STEVE RAMIREZ: Yeah, I think it’s special because when we start talking about things like consciousness, we define it as our ongoing sense of self awareness that’s realized in the brain. And I think that it’s special because it feels untouchable. And I study memory, and we say the same thing, that there’s a lot that feels ephemeral, and feels as though it doesn’t have some kind of biological substrate.

But now that we’re starting to portray consciousness on shows like Westworld, or study consciousness in humans, or perhaps the kinds of consciousness that other mammals, for instance, might have, I think it’s special and not special. It’s special because that sense of self-awareness is an amazing thing that arguably defines part of our humanity. It’s not special, in big quotes, in the sense that it still yields to the scientific method, and it’s still something that’s realized in the 86 billion brain cells stuffed between our ears.

IRA FLATOW: So are you saying that you believe that robots, AI, can eventually create something like consciousness?

STEVE RAMIREZ: My gut feeling, because I don’t like waffling on this question, is yeah, definitely. I mean, it’s something that I think they can experience perhaps, or one day will experience– a kind of consciousness– but it’s probably not exactly like what we experience, because they’ll of course be made up of different stuff.

But if Bernard’s character, for instance, can have us convinced that what was a human actually turned out to be a robot, then he not only passed the Turing test but actually displayed that level of self-awareness of what he doesn’t know about what these feelings mean. And we’ve all felt that as well, where maybe the first time you fall in love, it’s this strange thing that you’re not sure how to define, but you know that it’s there somewhere in the brain. So now my answer to that would be, I hope so, for a bunch of different reasons.

IRA FLATOW: Let me play a little clip of what they say on the series, on Westworld. One of the park’s founders claims there is no significant difference between human consciousness and AI.

[AUDIO PLAYBACK]

– There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can’t define consciousness, because consciousness does not exist. Humans fancy that there’s something special about the way we perceive the world, and yet we live in loops, as tight and as closed as the hosts do, seldom questioning our choices, content for the most part to be told what to do next.

[PLAYBACK ENDS]

IRA FLATOW: Robin, what’s your reaction to that clip?

ROBIN MURPHY: I love Westworld, and I love clips like that. I mean, that’s just wonderful, because that’s exactly what we’re seeing. There’s no threshold, there’s no like– and then, bingo, you’re now conscious. This is what we’ve seen all along in the development of intelligent systems. There’s a spectrum. And so getting to see that reflected in popular science, popular media, is great, because it makes our job much easier.

But also that idea of what is consciousness– I’m not qualified to talk about that. But certainly this idea that we’re living in loops, the behavioral aspects that we see in intelligence, what makes something intelligent– we’ve been arguing about what is intelligence for a long time. A cockroach seems pretty darn intelligent. When you’re trying to duplicate that– being able to navigate through environments it’s never seen before– all these things start to add up. So it’s very exciting. I really like the show and those types of comments.

IRA FLATOW: I’m Ira Flatow. This is Science Friday from WNYC Studios. And let me give out the number, because a lot of people would like to talk. 844-724-8255 is our phone number, if you want to talk about Westworld, and consciousness, and the robots. You can also tweet us @SciFri.

The programmer, Arnold– we’ve been talking about him. Actually, I guess the robotic form of him is in Bernard. No spoilers here, if you’ve been watching the series. He builds a pyramid theory of consciousness: memory, improvisation, self-interest. And then later he reframes it– perhaps consciousness is an inward journey. Is this consistent with science, Steve?

STEVE RAMIREZ: I think it’s a useful metaphor, but it kind of makes consciousness sound like it exists in this kind of ladder like scheme, whereas we know that there’s different aspects of things that we’re conscious of, whether it’s when you’re staring at a sunset, or listening to music, or having a conversation with yourself internally, for instance.

So there’s language components, memory components, our ability to understand what other people are thinking components. And I think all of those, rather than being a pyramid, per se, in the brain, it’s more like a soup of modules, I think, in the brain.

IRA FLATOW: And Robin, how would you react to that?

ROBIN MURPHY: I think that Steve just nailed it on the head. In artificial intelligence, after the initial forays in the ’60s, we weren’t making a lot of progress, so we started looking at biological intelligence– we started doing industrial espionage. And that ladder-like scheme and that soup of modules describe the best architectures that we’re coming up with for artificial intelligence and robotics.

So you start off with your very basic behaviors, your motor schemas, those loops of reflexes that you have. Then you have more sophisticated systems. And then you’re duplicating what we see in the visual cortex, starting to do those more advanced types of processing. And yet we know from biology that it’s not all just a straight clunk– this layer, then this layer, then this layer– that communicate. And that richness adds to the complexity.
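To make that layered picture concrete, here is a minimal sketch of a behavior-based control loop of the kind Murphy alludes to, with a low-level reflex layer running underneath a more deliberative one. The behavior names, sensor fields, and commands are hypothetical illustrations, not anything from a real robot stack:

```python
# A minimal sketch of a layered, behavior-based control architecture:
# lower layers (reflexes) are consulted first and can override higher,
# more deliberative layers. All names here are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Sensors:
    obstacle_distance_m: float   # range to nearest obstacle
    goal_bearing_deg: float      # direction toward the current goal

# A behavior inspects the sensors and either issues a command or defers (None).
Behavior = Callable[[Sensors], Optional[str]]

def avoid_collision(s: Sensors) -> Optional[str]:
    # Reflex layer: fires whenever an obstacle is too close.
    if s.obstacle_distance_m < 0.5:
        return "STOP_AND_TURN"
    return None

def head_to_goal(s: Sensors) -> Optional[str]:
    # Deliberative layer: only reached when no reflex has fired.
    return f"STEER {s.goal_bearing_deg:+.0f} deg"

# Layers are ordered: earlier (lower) layers take precedence over later ones.
LAYERS: list[Behavior] = [avoid_collision, head_to_goal]

def control_step(s: Sensors) -> str:
    for behavior in LAYERS:
        command = behavior(s)
        if command is not None:
            return command
    return "IDLE"

print(control_step(Sensors(obstacle_distance_m=0.3, goal_bearing_deg=40)))  # STOP_AND_TURN
print(control_step(Sensors(obstacle_distance_m=2.0, goal_bearing_deg=40)))  # STEER +40 deg
```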

IRA FLATOW: I’m wondering– even going back to the original Westworld, back with Yul Brynner as one of the robots– I remember when one of the guests got shot and said, I’ve been wounded. And that breaks that little divide, the robots rebelling, and breaks one of Isaac Asimov’s great three rules about robotics– that the robots can’t attack their masters.

ROBIN MURPHY: Oh, dear god, we’re not bringing up– OK, I’m sorry. I love Isaac Asimov, but the three laws of robotics were set up explicitly to sound good, but have all of these subtle ramifications that could power plots, because it’s ambiguous. And so I always, as you can tell, cringe when people say, oh, robots should follow the three laws of robotics. Well, that means that they will inherently be screwed up, because they can’t follow them. It was set up to have those disconnects.

IRA FLATOW: Well, but people have– well, as you say, that was a long time ago. And certainly, science fiction as a whole has moved past that and into the new Westworld and other places. But we’re going to take a break and talk lots more about where robotics is today, and consciousness. Please call us– our number, 844-724-8255. We promise not to bring up Isaac Asimov anymore in the conversation.

ROBIN MURPHY: I love Asimov, it’s just that–

IRA FLATOW: OK, well I had to. You know, he’s the father of that whole– we’ll talk about it more. Stay with us, we’ll be right back after this break.

I’m Ira Flatow, you’re listening to Science Friday. We’re talking about robotics this hour with my guests Robin Murphy and Steve Ramirez. Our number, 844-724-8255. Robin and Steve, if we’re trying not to build human consciousness into robotics, how do we decide what the limit of the consciousness should be? And the differing kind of consciousness they should have? Let me ask Robin first.

ROBIN MURPHY: I don’t know that many of us in AI robotics think of it as building consciousness. I would think of it as that we think in terms of levels of initiative– what is the appropriate degree that we can delegate to a particular agent, an autonomous agent? So what does it mean to just do exactly what I tell you to do? Is it OK for you to change the goals or how you do it?

Or is it OK to come back and say, no, I can’t do this? Is it OK to come back and say, I’ve got recommendations? Or is it, hey, we’re giving you everything and complete ability to change up? We’re still at the lower levels of building initiatives into systems. But in each case, it’s always bounded. It’s in there somewhere that we programmed in and what those boundaries are. Just like when we give our directions to a kid or a coworker, we typically have bounds on those.
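A rough sketch of what that bounded initiative might look like in code– the levels and names below are invented for illustration, not a standard taxonomy from the field:

```python
# A hypothetical sketch of "levels of initiative": the delegated bound is
# explicit in the program, and the agent cannot act above it.
from enum import IntEnum

class Initiative(IntEnum):
    EXECUTE_AS_TOLD = 0       # do exactly what it was told
    ADAPT_METHOD = 1          # may change *how*, not *what*
    DECLINE_OR_RECOMMEND = 2  # may refuse or come back with recommendations
    FULL_DISCRETION = 3       # may change the task itself

class BoundedAgent:
    def __init__(self, bound: Initiative):
        self.bound = bound    # the explicit, programmed-in limit

    def request(self, action: str, needs: Initiative) -> str:
        if needs <= self.bound:
            return f"doing '{action}' (initiative level {needs.name})"
        # Anything above the bound is escalated back to the human.
        return f"escalating '{action}': needs {needs.name}, bound is {self.bound.name}"

agent = BoundedAgent(bound=Initiative.ADAPT_METHOD)
print(agent.request("mop the floor", Initiative.EXECUTE_AS_TOLD))
print(agent.request("reorder the cleaning schedule", Initiative.FULL_DISCRETION))
```

The point of the sketch is Murphy’s: the boundary is programmed in somewhere, so initiative above it never simply executes– it gets kicked back to the human, much as directions to a kid or a coworker come with bounds.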

IRA FLATOW: Steve, do you share any of the real fear that we’ve heard some scientists and technologists express– that the AI robots can get smarter than we want them to and take us over?

STEVE RAMIREZ: Maybe this is me being an eternal optimist, but no, I think I have maybe only 1% of that worry. I think I agree with everything that Robin said, that there tends to be this tendency to kind of Hollywood-ify these things, and to say that once we have a handful of robots that can jump, or run, or self-driving cars, for instance, that they’re going to become self-aware, and then take over the world. And this is assuming that that’s their intention to begin with.

Whereas, I tend to think that these things actually just free up a lot of more time for us to go and do other jobs. And the self-driving cars is a good example, where it makes a handful of us a little bit uncomfortable being in the backseat of a self-driving car. But we used to think about that with things like Uber, and it would be crazy to say, 15 years ago, that you’re going to get into the backseat of a car with a stranger. And now it’s pretty routine, and you don’t really bat an eye.

So I think that in that case, the Hollywood style of thinking about this tends to be a little bit overly grandiose, as opposed to: we’re going to have robots that continue to assist us in ways– whether it’s through prosthetic limbs, or self-driving cars, or things that, rather than replace our humanity, just happen to nuance it.

IRA FLATOW: 844-724-8255 is our number. Let’s go to Nathan in Philadelphia. Hi, Nathan.

NATHAN: Hello.

IRA FLATOW: Hi there.

NATHAN: I was wondering if there is a Bill of Rights for artificial consciousness? And if not, who would write it?

IRA FLATOW: Good question. Should we worry about robots? They’re machines. Are there ethics rights for them? No one’s–

ROBIN MURPHY: Sure.

IRA FLATOW: Very quiet on this. Robin–

ROBIN MURPHY: There’s a lot of ethics and there’s a lot of work, particularly in Europe, looking into rights for artificial consciousness. And we saw David Hanson’s Sophia robot granted citizenship in Saudi Arabia. I’m going to flip it over a little bit. What’s scary to me personally– what I’m seeing as a member of the board of the Foundation for Responsible Robotics– is this rapidly expanding sex doll industry. It’s huge.

And there’s a serious debate in the social sciences over whether these very realistic sex bots have therapeutic value. But there have been no studies. So what we’re seeing are creepy, child-size sex bots. And so we’re seeing Europe starting to get into legislating that– the UK just had a big set of arrests there. In the United States, we’ve got some legislation pending.

Whereas Westworld was kind of focusing on maybe robots needing legal protection– because they’re intelligent, they deserve protection under the law– we also need to be thinking about more immediate stuff, like the sex bots.

IRA FLATOW: How about sex between consenting robots, of adult age? That OK?

ROBIN MURPHY: Sex between consenting robots is fine with me.

IRA FLATOW: OK.

STEVE RAMIREZ: Agreed.

IRA FLATOW: The reason I ask this is, years ago, when we first started covering artificial intelligence– I’m talking like 10 years ago or more– I interviewed some roboticists and people who were talking about their early robots, and they said to me, you don’t understand, the next big money-making thing is going to be AI, sexual AI. Because that’s so big on the internet now, in terms of pornography, no one’s going to stay away from that kind of issue. And it’s interesting, Robin, that you’re raising it now, because it seems to be important.

ROBIN MURPHY: It is. And I would encourage people to go to the website for the Foundation for Responsible Robotics and the study that they did for the European Union on our sexual future with robots. It really goes through both the AI aspect and what the investment is in that, and what the unanswered questions are– good thing, bad thing. But it is a thing, it is a thing, no doubt about that.

IRA FLATOW: One of the themes we see in the show, in Westworld, is the relationship of memories to the robot’s sense of self. They’re having strong memories. And they have multiple memories of different lives. And they have the ability to build new ones.

[AUDIO PLAYBACK]

– These memories, the girl, my daughter, I want you to remove them.

– I can’t, not without destroying you. Your memories are the first step to consciousness.

[PLAYBACK ENDS]

IRA FLATOW: Big concept there, Robin. Can we build realistic AI without something like memories?

ROBIN MURPHY: No, we can’t. And you start looking at what we expect in realism, for us to talk, we have a common ground, which is memories, how we’ve built up our understanding of the world. We also have a lot of built-in emotion.

One of the things that I disagreed with Westworld on– which is kind of different than disagreeing with a scientific paper, I guess, but an academic word, disagree– is this idea that emotions are so basic. They help regulate those ladders of intelligence, those modules in between. So we don’t see how you can actually get to that conscious level, that pure level of consciousness, without memories, without this common ground, without scripts of how the world works.

I think one of the most fun things, from a programming standpoint, is that you’ve got a TV show about robots that is itself following a script– screenplays. And that’s actually how we look at programming stereotypical actions, things that happen, events that happen very often. I think when you look at things like Google Duplex, you’re going off the fact that when you call to make an appointment, it’s very intentional. The person who’s taking your call and making the appointment is very intentional.

There’s only a modicum of social convention of hi, how are you, whatever. Nobody really cares. Now we’re getting into the meat of what days, what times, all of that. These are things that we’re learning to build in as scripts, almost as if we’re writing little screenplays for our artificial intelligence systems.
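The scripts Murphy describes echo Schank and Abelson’s classic idea of stereotyped event sequences. As a purely illustrative sketch– the slot names, prompts, and answers below are invented, not anything Google Duplex actually uses– an appointment-booking script can be written down as an explicit slot-filling structure that the system simply walks in order:

```python
# A hypothetical sketch of a Schank-style "script": the stereotyped
# appointment-booking interaction is written down like a tiny screenplay,
# and the system walks the slots in order. All names here are invented.
APPOINTMENT_SCRIPT = [
    ("service", "What is the appointment for?"),
    ("day",     "What day works for you?"),
    ("time",    "What time on that day?"),
]

def run_script(script, answers):
    """Walk the script in order, filling each slot from the caller's answers."""
    filled = {}
    for slot, prompt in script:
        print("PROMPT:", prompt)
        filled[slot] = answers[slot]      # a real system would parse speech here
        print("ANSWER:", filled[slot])
    # The closing line of the "screenplay": confirm everything that was filled.
    print("CONFIRM: {service} on {day} at {time}. Correct?".format(**filled))
    return filled

run_script(APPOINTMENT_SCRIPT, {"service": "a haircut", "day": "Tuesday", "time": "3 pm"})
```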

IRA FLATOW: And speaking of that very point– I have heard recently that there have been calls for having the robots on the other end of the phone line be identified as robots instead of people. We want to know who we’re talking to– is it a robot or is it a person? People are already now starting to ask for things like that.

ROBIN MURPHY: Well, I’m personally wondering about that, what the big furor is. Is it because, if it’s a robot, they’re probably recording the conversation and using it for the database, as yet another intrusion into our privacy? And we [INAUDIBLE] if it’s creepy. But for me, it’s sort of like a bored person from a call center versus a non-sentient robot. I mean, there’s just not a lot of difference– can I just get my appointment and be done with it?

IRA FLATOW: Steve, I know that you research memories. And I want to go back to that question, can we be conscious without memories? How important are the memories?

STEVE RAMIREZ: Yeah, in this case, I think it ends up being like you really can’t have perhaps one without the other, where they’re both intertwined. I mean, memories thread and unify our overall sense of being. So we see it in cases like dementia-like states, where you can be conscious in the moment, but your overall sense of identity over time loses that kind of common denominator that is memory.

And actually, that was one of the most fascinating things to me about how this is portrayed in Westworld, because when you look at characters like Dolores or Maeve, and having memories of their previous iterations start creeping back in, basically you can’t help but ask the question, well, do we forget?

Our iterations are basically like when we go to bed and we wake up in the morning, or perhaps who we were when we were two years old and three years old, which most of us can’t remember, whereas what we did yesterday, we remember. So I think it really gets at that fundamental question of are you the same as your previous iteration.

Except in Westworld, it’s displayed of course by fixing the host. But I thought it was interesting, because it also brings to light the question of, well, if those memories are there, then are there ways of actually trying to tinker with them and bring them back, even though they were once thought to be lost. And I think that that’s actually a very real scientifically tractable question.

IRA FLATOW: That is one of the really interesting aspects that they’re focusing on in Westworld. Let me get in a tweet from Mike, who says– first he says, I’ll be careful not to mention Asimov. He goes on to say, do you believe AI can eventually be more intelligent than humans using self learning algorithms? I feel like humans are approaching the plateau on how much we can actually do and process. Steve, what do you think?

ROBIN MURPHY: So what does–

IRA FLATOW: OK, Robin, you go first.

ROBIN MURPHY: Sorry, I mean, what does more intelligent mean? We already have AI systems that can outperform people on certain tasks. So are we talking about– are they like a robot, or an AI system like the old movie, Colossus, that comes over, and just says, OK, I’m taking over the world, and I’m going to end world hunger, and I’m going to end war, and I’m going to do all of this, you just do what I tell you to. Are we talking that kind of intelligence?

IRA FLATOW: Good question.

STEVE RAMIREZ: Yes, I think also– I completely agree, to second that– it also depends on in what aspect, and how we define intelligence. Because I rely on my phone, for instance, to get from location A to B multiple times a day. And it can figure out a better path to get from work to a restaurant than I could have figured out, for instance. Or things like calculators, or, once again, self-driving cars– they’re probably going to be better than us at a lot of things.

But then we get another dimension of, well, if we’re trying to mimic human behavior– you could imagine a world where there’s AI like in the newest Blade Runner, for instance, where the main character basically has an AI love interest, or like the movie Her, and then you realize that it starts to blur the lines a little bit. But I think one of the things that the show does really well is bring those questions out. So starting from the perspective of, will they be more intelligent or smarter than us– it’s just a matter of, it depends at what.

IRA FLATOW: Yeah, I’m Ira Flatow, and this is Science Friday from WNYC Studios talking about robotics. In the original movie Westworld, we were clearly supposed to side with the humans. And in this series now, nearly all the relatable characters are robots. Clearly, I think sympathies have shifted. But why do you think that is, Steve? Why do you think we’ve moved over to the other side?

STEVE RAMIREZ: It ends up being a great vehicle for entertainment, because it flips the narrative on its head a little bit, where we’re used to– we side with the human because, of course, we’re humans. But in this case, again, when you look at the character of Bernard, you can see how he basically fooled everybody into thinking that he was human and then ends up being a robot.

So I think it’s basically to force us to feel a little bit uncomfortable by being able to say, wow, I actually found myself relating to what I thought was human, which actually ended up being a robot. And in that way, it actually forces that conversation to begin, which I think the show does really well, whereas other shows might have been more clean-cut– we side with the humans, the robots take over, the humans win. I think in this case there are more layers to the narrative, which I think just makes it a more enticing show.

IRA FLATOW: Yeah, you get to think about it a little more. Let me go to, see if I can get another phone call in before we have to go. This one’s in Pennsylvania. Hi, welcome to Science Friday.

JOHN: Hi, how are you?

IRA FLATOW: Hi. I didn’t get your name on the screen there.

JOHN: My name’s John.

IRA FLATOW: Hi, John, have you got a question for us?

JOHN: I do. Number one, I wanted to say that I am a big fan of Westworld, and I’ve been hooked on all this stuff since watching the [INAUDIBLE] project.

But there was an episode of Star Trek, The Next Generation, in which the computer started developing inside of itself neural networks, and using all the data that was in the computer about all the memories of all the people, and all the memories of everywhere they had gone, and all the things that they had explored, emerged a consciousness. A consciousness just suddenly emerged and created itself out of all this information and data. Does anyone speculate that anything of that sort could actually happen, or could ever happen with our own internet?

IRA FLATOW: Good question. Are these paradigms of just walking bodies– are they obsolete already? I mean, is the internet where the next big robotic thing is going to happen? Robin, what do you think?

ROBIN MURPHY: We’ve certainly seen that in science fiction, and it’s been speculated in the popular press, that we’re just going to have this emergent behavior, this self-awareness, come out of that. Again, it really violates what we know about how we program, with bounded rationality, where we also have these explicit boundaries, these limits on initiative. As for how it would jump out– we don’t know how to write that code. We don’t foresee writing that kind of code.

IRA FLATOW: All right, that’s about all the time we have for now. I want to thank my guests, Robin Murphy, professor of computer science and engineering at Texas A&M University in College Station, Texas. She joins us by Skype. Steve Ramirez, assistant professor of neuroscience at Boston University. Thank you both for joining us today.

STEVE RAMIREZ: Thank you very much.

IRA FLATOW: I hope you’re enjoying–

ROBIN MURPHY: Thank you.

IRA FLATOW: I hope you’re enjoying Westworld and you think it’s good for robotics, no?

ROBIN MURPHY: It is? I’m sure they’re enjoying it.

IRA FLATOW: OK. That’s all the time we have. I want to thank our host today, 90.5 WESA in Pittsburgh, and WESA’s John Sutton, Russ Lloyd, Tom Hurley, Helen Wigger, Terry O’Reilly, and Nick Wright for their help putting on the show from their radio station today. And– oh, yeah– join us tomorrow night at the Carnegie Library Music Hall for a special live taping of the show. We’re going to have a great live show, and there are still a few tickets left at sciencefriday.com/Pittsburgh.

Charles Bergquist is our director; our senior producer is Chris Intagliata. Our producers are Alexa Lim, Christie Taylor, and Katie Hiler, with technical engineering help from Rich Kim, Sara Fishman, and Jack Horowitz, back in New York. And we’re active all week on Facebook, Twitter, Instagram– all the social media. And you can now play us on those smart speakers– ask it to play Science Friday whenever you want. So every day now is Science Friday. We have all the educational content up on our website. I’m Ira Flatow in Pittsburgh.

Copyright © 2018 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producer

About Christie Taylor

Christie Taylor was a producer for Science Friday. Her days involved diligent research, too many phone calls for an introvert, and asking scientists if they have any audio of that narwhal heartbeat.
