How Strong Is The Human-Robot Bond?
Undiscovered is a podcast from Science Friday and WNYC Studios about the left turns, missteps, and lucky breaks that make science happen. We tell the stories of the people behind the science, and the people affected by it. Listen to more episodes here.
If you were given a robot and asked to break it, would you do it? The number of Furby destruction videos on YouTube suggests it wouldn’t be that hard. But that’s not true for all robots. According to researchers, knowing more about a robot or bonding with it can make you hesitant to harm it. And if the bond between you and a robot is strong enough, you might even go out of your way to protect it.
Kate Darling, robot ethicist from the MIT Media Lab, and Heather Knight, robotics researcher from Oregon State University, join Ira to talk about how we become attached to robots, and how this relationship can even influence our behavior.
Plus, our spinoff podcast, Undiscovered, is back! Hosts Elah Feder and Annie Minoff chat about the upcoming season, and give us a sneak preview of the first episode. Can’t wait? Listen to the trailer below.
And finally, we asked whether you have ever felt emotional attachment to a machine. If your answer is ‘Yes!’ you are far from alone. Check out some of the robo-love responses below.
I was super attached to the car I bought in college. She was like a friend who got me through five cross country moves and was always there for me, to share in my happiness or to cheer me up when I was feeling down. I cried when I couldn’t keep her anymore.
— Meredith (@StarrySkyKnits) September 6, 2018
I don’t know if Vicki feels the same, but I do love my robot vacuum! Vicki the vacuum is an amazing household helper! I will say excuse me to her if we cross paths while she’s out and about doing her thing. So, yes, I’d say I’m emotionally attached!
— Carolyn Steele (@MusicSteele) September 6, 2018
Scientists get very emotionally attached to our equipment. I know several people who are unwilling to be rude about their experimental setup whilst in the lab in case it hears them, gets offended and stops working.
— Dr Rachel Oliver (@DrRachelNitride) September 5, 2018
The charging port on one of our tablets broke. As the battery got low, it kept chirping asking us to charge it but we couldn’t. It was surprisingly sad and we had to hide the tablet in a box until it died because we couldn’t listen.
— Peter Wayner (@peterwayner) September 6, 2018
I came home to find the robot vacuum cleaner tangled in a rug whining for help… unconsciously I said “poor baby, I will get that”… I felt stupid immediately. 😀
— Sidney Monteiro (@SidneyMonteiro) September 6, 2018
My old Hobart-made Kitchenaid mixer is named Betty, and she’s tough as nails. I like to give her a pat for a job well done.
— SpringPeeper (@RachelsBirds) September 6, 2018
On feeling empathy for robots.
Kate Darling: In a study, we were interested in two things. First, we were interested in whether people would hesitate more to hit [a robot] if we introduced it with a name and a backstory. So if we said, this is Frank. And Frank’s favorite color is red. And he likes to play.
And then the second thing we wanted to know was whether people’s hesitation to hit the robot correlated to their natural tendencies for empathy. So we did this psychological empathy test with people. And they would hesitate more when there was this name and this story around the Hexbug. So it was just a little experiment, but it was really interesting because we think it indicates that there may be a link between how empathic you are as a person and how you’re willing to treat a robot, even if you know that it’s just a machine.
On why we’re often unnerved by robots that look a little too human.
Kate Darling: For me, it has a lot to do with expectation management. If you have a robot that looks too close to something that you’re intimately familiar with, like a person—or it could also be you have a robot that’s supposed to look like a cat, and it doesn’t behave exactly the way you expect it to. Then that kind of creeps people out because it feels off. So I feel like the uncanny valley is a little bit about managing people’s expectations. And a lot of successful design in robotics tries to gravitate more towards shapes and forms that we relate to, but that aren’t trying to just mimic something that we’re very close to; you’re much more willing to suspend your disbelief and interact with it as though it were its own thing.
Heather Knight: People are really good at filling in the blanks. And so sometimes, if you give them more abstract forms, it’s like a Noh theater mask. There’s a form of Japanese theater where they don’t have very much expression. And so then you can read these with all kinds of complexity.
On why robotics seems to develop more readily in certain countries like Japan over others.
Heather Knight: Korea is huge right now as well. So I think it’s religion. If you look at monotheistic faiths, there is storytelling where you’re usurping the role of God to try to make a lifelike creature. Whereas in the more Shinto cultures, everything wants to work in harmony, and there isn’t a hierarchy of creation between mountains and animals and humanity and robots.
Annie Minoff is a producer for The Journal from Gimlet and the Wall Street Journal, and a former co-host and producer of Undiscovered. She also plays the banjo.
Elah Feder is a development producer for Science Friday. She co-hosted and produced the Undiscovered podcast. She’s also Science Friday’s resident Canadian.
Kate Darling is a robot ethicist at the MIT Media Lab in Cambridge, Massachusetts.
Heather Knight is a robotics researcher at Oregon State University in Corvallis, Oregon.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. If you are a fan of Sci Fri, you might also listen to our science documentary podcast Undiscovered, hosted by our very own Annie Minoff and Elah Feder.
And I’m happy to announce that after months of brandishing their microphones on the road, hunkering down in our recording studio, Annie and Elah are back with season 2, 10 weeks of stellar episodes rolling out this fall wherever you get your podcasts. And to give us a little preview of what they’ve got in store, Annie and Elah are here with us, the co-hosts and producers of Undiscovered from Science Friday and WNYC Studios. Welcome back.
ANNIE MINOFF: Hey.
ELAH FEDER: Thank you.
ANNIE MINOFF: Thanks for having us.
IRA FLATOW: So what’s in store? What have we got in store this season?
ELAH FEDER: OK, so this is Elah speaking, by the way.
Yeah, we’ve been told we sound alike.
ANNIE MINOFF: A little bit.
ELAH FEDER: So we are coming back, as you said, for the next 10 weeks, every Tuesday. And we have all kinds of stories this season. We have an episode about what killed the dinosaurs. It’s really about a scientist who has been saying that we’ve got this all wrong. She’s been telling other scientists for about 30 years that they have this all messed up, and she’s going to set them straight.
ANNIE MINOFF: They love this, obviously.
IRA FLATOW: Yeah, you would have to.
ELAH FEDER: So it’s about dissent in science. We have another episode about a tree from Australia that is doing very well in California. You might– the eucalyptus tree.
ANNIE MINOFF: A certain minty smell, yeah.
IRA FLATOW: Oh, yeah, it’s everywhere.
ELAH FEDER: So the episode is really about the fight against non-native invasive species and how much of that is based on science and how much of that is based on the sense that these species are outsiders who don’t belong here.
ANNIE MINOFF: And we’re actually following a story that is still unfolding right now about political gerrymandering in North Carolina and the role that math might actually play in the next few years in redrawing a whole lot of voting district lines.
IRA FLATOW: Mm. Let’s talk about your first episode next Tuesday.
ANNIE MINOFF: Yeah.
IRA FLATOW: It’s about robots, right.
ANNIE MINOFF: It is indeed about robots. And it was actually inspired by something I heard on Science Friday. And you had had a science fiction writer on the show, Daniel Wilson.
IRA FLATOW: Oh, yeah.
ANNIE MINOFF: Yeah, he mentioned, kind of in passing, this video that some robotics researchers had made in the ’90s of this experiment that they had done. And it’s quite the video. It opens up on kind of a nondescript lab. You see a man and a robot, the man at his computer, robot– Daniel describes it as looking kind of like a trash can, which I think is quite accurate. And the robot then asks the man a question.
XAVIER: Hello. I’m Xavier. Shall I purchase a cup of coffee for you?
MAN: Yes, please.
XAVIER: All right. Please wait for a while.
[ROBOT SQUEAKING AND WHIRRING]
ANNIE MINOFF: All right, so Xavier, the coffee-fetching robot, has his job. He is going to wheel down to a cafe, get in line, and then order a cup of coffee. And of course, the experimenters want to know, have we programmed this robot to fulfill this everyday task? Can it do it?
And so you see him roll into the coffee shop, kind of sidle into line and make his way to the front, and order the coffee. And he scores 100. A-plus to Xavier. The experiment is a success. But what’s really interesting about this video is not so much what you see Xavier, the robot, doing as how everyone else in this cafe is reacting to the presence of a coffee-buying, line-standing robot.
ELAH FEDER: So there’s this one particular moment where a man walks into the lobby, sees this robot standing in line to get coffee. And you kind of see him hesitate for a minute, scratch his head, look at the robot. And you almost imagine him thinking, like, OK, do I have to get in line behind this robot?
ANNIE MINOFF: Who you would assume is not drinking the coffee, so maybe that’s fine.
ELAH FEDER: Or is it rude to cut in front of a robot? Like, how do you treat a robot? What is the etiquette?
ANNIE MINOFF: Yeah. So for us, this little vignette really inspired a question, which is, when robots start acting in ways that are very human-like, how do we treat those robots? How do we feel about them?
IRA FLATOW: So did you come up with the answer to that?
ELAH FEDER: We solved it.
ANNIE MINOFF: I don’t know if we came up with the answer. But we looked to an experiment. Some people had tried to figure this out about 10 years ago, this group of Seattle psychologists.
And they did a very intriguing experiment where they introduced a very cute humanoid robot named Robovie to a group of kids and teenagers. And the experiment starts out super simple. You start out with an introduction between Robovie, this robot, and Eric, a teenager.
MAN: Robovie, meet Eric.
ROBOVIE: Hi, Eric. It is very nice to meet you. Will you shake my hand?
ELAH FEDER: Weird voice. But that is, “Hi, Eric. Will you shake my hand?”
ANNIE MINOFF: This is real footage from this experiment. And you see this generic office space. And standing inside the door is this 15-year-old boy with a buzz cut, shaking hands with a robot.
ROBOVIE: How are you today?
ERIC: I’m good. How are you?
ROBOVIE: I am doing well.
ERIC: That’s good.
ROBOVIE: Thank you for asking.
ERIC: You’re welcome.
ELAH FEDER: That’s just a little clip from the episode. I love that “s’up.”
OK, so you have this scene– robot meets teenager. They have some light chit-chat, play a game of I Spy, which actually, for a teenager, seems a little young. But anyway, it all seems pretty innocuous until the researchers intervene. And they do something not super nice to the robot.
ANNIE MINOFF: Yep.
ELAH FEDER: We’re not going to tell you what that is.
IRA FLATOW: Oh!
ELAH FEDER: I’m sorry.
IRA FLATOW: No spoilers!
ANNIE MINOFF: Yeah. Yeah. Every time Elah sees this scene, she’s like, ugh, no, don’t do it!
ELAH FEDER: Like I didn’t know it was coming.
ANNIE MINOFF: A very strong reaction.
ELAH FEDER: Anyway, basically, the robot is wronged in a particular way. And the question the researchers have is, how is this teenager– or a kid in some cases. How are they going to react? Like, now that they are friends, they’ve been friendly with this thing, how do they treat it? Like, how much empathy should they have for it?
ANNIE MINOFF: Mhm.
IRA FLATOW: Yeah. And that opens the bigger question to all the listeners, how they would act themselves–
ANNIE MINOFF: Yeah.
IRA FLATOW: –in such a situation.
ELAH FEDER: How empathic are you with your robots?
IRA FLATOW: Well, we’re going to have to listen and find out, I guess.
ANNIE MINOFF: That’s exactly true. And that particular episode is coming out next Tuesday, so you’re going to know really soon.
ELAH FEDER: Yeah. So if you listen, you can get Undiscovered anywhere that you get your podcasts. So just type in “Undiscovered” in the Search box, [INAUDIBLE] hit Subscribe. And you can hear this episode next Tuesday.
ANNIE MINOFF: Yeah.
IRA FLATOW: I can’t wait. That sounds great.
ANNIE MINOFF: Yeah.
IRA FLATOW: Elah Feder and Annie Minoff are the co-hosts and producers of our Undiscovered podcast. And they’ve got– besides the robot one, there’s a whole new season to share with you. So subscribe wherever you get your podcasts. Thanks, guys.
ANNIE MINOFF: Hey, thanks for having us.
IRA FLATOW: You know, when I heard about the new episode of Undiscovered, it reminded me of a classic episode of The Twilight Zone. Maybe you remember this one. In this episode, we see a family being taken care of by maids and butlers. OK, big deal.
But even though the household staff look like people, they’re actually robots. Dr. Loren, the head of the household, built them to cook and clean and even light his pipe for him. His daughter hated how dependent he and his wife had become on the robots. She wanted him to destroy them. But Dr. Loren wasn’t having any of it.
DR. LOREN: They’re not just machines. Do you know how many thousands of hours I’ve spent in developing them and perfecting them? Do you realize how marvelously intricate they are, how scientifically precise? Not just arms and legs that move, Jana. They’re creatures.
IRA FLATOW: Yeah. And he’s telling that to his daughter. Spoiler alert. Turns out his daughter is also a robot. She doesn’t know it, though.
And in this episode– this is 60 years ago, this episode of The Twilight Zone. And science fiction writers have been exploring this idea for over 60 years– how we feel about robots in our midst. So tell us, out there, have you ever felt emotionally attached to a machine or a robot? Even if you know a machine isn’t alive, can you not help but treat it like it is?
Our number, 844-724-8255. Join in that conversation. You can also tweet us, @scifri. And that’s what we’re going to be talking about– why we become attached to some robots and not to other ones.
Let me introduce my guests. Kate Darling is a robot ethicist at the MIT Media Lab in Cambridge. Welcome to Science Friday.
KATE DARLING: Thanks for having me.
IRA FLATOW: You’re welcome. Heather Knight is a robotics researcher from Oregon State University in Corvallis. Welcome to Science Friday.
HEATHER KNIGHT: Thank you. Greetings.
IRA FLATOW: Greetings. This feeling we have for robots, Heather, is it just for ones that look human?
HEATHER KNIGHT: No, not at all. It’s for anything that behaves in a way that we can understand. Like, it has interactivity.
IRA FLATOW: Even the one that’s carpet-sweeping our floor?
HEATHER KNIGHT: Yeah, absolutely. People name them. I had a coworker once that came home to a knocked-over plant and a guilty little robot in a pile of dirt that, he thought, was looking up at him with puppy dog eyes: I’m sorry, Daddy.
IRA FLATOW: That is cool. And, Kate, you’ve held workshops to see if people would destroy these little bug robots. Tell us about that experiment.
KATE DARLING: Yeah. Well, actually, we did workshops with these really cute baby dinosaur robots. And then we did some actual experiments with the bug robots. And the cute baby dinosaur ones were really dramatic, where we gave people these robots that are about the size of a cat, that have these eyes and make these really expressive movements and sounds, almost like pets or almost like a baby.
And the thing that’s really cool about them is that they know when you’re hitting them a little bit too hard. And they can also sense where they are in space. So if you’re holding them upside down, for example, they’ll start to cry.
And so we had people name them and play with them and do these activities with them. And then we asked them to torture and kill them. And it was very dramatic, like I just mentioned. People really were refusing to even hit the robots. And we had to kind of force them to even destroy them in the end.
So that inspired some research that I did later on with Hexbugs, which are this little toy that people who have kids might be familiar with. It’s a little, like, toothbrush head sized toy that moves around like a little bug. So it has this very lifelike movement. And we had people come into the lab and smash them with mallets.
And we were interested in two things. So first of all, we were interested in whether people would hesitate more to hit this thing if we introduced it with a name and a backstory. So if we said, this is Frank. And Frank’s favorite color is red. And he likes to play. And we kind of personified the robot a little bit.
And then the second thing we wanted to know was whether people’s hesitation to hit the robot correlated to their natural tendencies for empathy. So we did this psychological empathy test with people. And we found that people who have high empathic concern for other people would hesitate more to hit these Hexbugs. And they would hesitate particularly more when there was this name and this story around the Hexbug. So it was just a little experiment, but it was really interesting because we think it indicates that there may be a link between how empathic you are as a person and how you’re willing to treat a robot, even if you know that it’s just a machine.
IRA FLATOW: Wow. And even if your robot looked like a cockroach? Which you would not think twice about stepping on, perhaps.
KATE DARLING: Exactly. I mean, yeah. So this is, in part, why we were using Hexbugs and not, like, a really cute robot. Because we were like, OK, if we can find an effect with even this thing that resembles a cockroach, then that is maybe meaningful.
IRA FLATOW: You must have been surprised by this, I would imagine. Or maybe you weren’t.
KATE DARLING: Oh, yeah.
IRA FLATOW: Yeah.
KATE DARLING: Well, I mean, I knew from observing people and even observing my own behavior around robots– that was probably the most surprising observation of all– to realize that I felt empathy for robots in my life, even though I knew exactly how they worked. So I was expecting to find something there. Also because there’s a lot of other research.
Heather, who’s on this show, has done a ton of fantastic research in this area. There’s a whole body of research in human-to-robot interaction that shows how people respond to robots. But it was really interesting to do these workshops and these experiments and actually see it in numbers.
IRA FLATOW: This is Science Friday from WNYC Studios. I’m Ira Flatow with Heather Knight and Kate Darling. Heather, how do you react to this? What’s your experience with robots and people?
HEATHER KNIGHT: Yeah. So I’ve been doing robotics research for 17 years now. I started as a freshman at MIT. And it’s funny that, even knowing what I know about the programming of the robot, sometimes it’s like I fall into it as well.
So I am a professor at Oregon State right now. And we are developing a robot called Resolution Bot. It runs for the first couple weeks of the year and then sort of peters out. But it’s meant to try to help people keep their health and fitness plans.
And so it goes around and visits us. And so the first year, it was a remote-controlled study. And we’re doing increasing levels of autonomy every year.
So anyway, 17 years in, I have this robot visiting me every couple of days, just checking in on my health. And one day, it sort of trips on the edge of the carpet and falls over. And I just run over.
And I’m like, what is that? It says, “Help!” And I go, and I pick up the robot. And this is, again, one of those Xavier-style, like, little trash can robots, maybe a small trash can. So all it is is just sort of a circle that is carrying a basket of fruit and has, like, these little smiley face buttons on it.
And I look over the awning at my students. And I’m like, is the Resolution Bot OK? And they’re saying, I don’t think we can restart its localization right now.
And so I, like, pick up Resolution Bot like a toddler. I carry it down the stairs. And I bring it to my students so that they can, like, fix the software.
And in the moment, people were like, oh, that’s kind of funny. It fell over. Did you think about taking a picture? And I’m like, no, I was worried about it.
So if the researchers aren’t immune to it, of course general people will be–
IRA FLATOW: It almost sounds like people– and maybe your study showed this– have more empathy for the robots than they might have for people.
HEATHER KNIGHT: I’m not sure about that. I mean, I don’t think robots can convey, like, the same complexity of a character that a person does. And so we definitely see the holes sometimes. But, yeah, I think, if it’s not too much trouble, it’s easy to have empathy or help a robot in ways that aren’t too big.
I mean, they can definitely cross the line. If they just keep asking and asking for favors and never giving anything back, people will stop helping them. So there’s some cool work by Stephanie Rosenthal at Carnegie Mellon University that shows people will not help robots forever.
IRA FLATOW: Can you design a robot that you don’t want to feel empathetic for? For example, if you have a military robot that you know is going to investigate bombs or something, you know it might get blown up sometime.
HEATHER KNIGHT: Yeah, that’s a great question. I think that’s an area of active research– how you can make people think of something as more of a machine rather than a social character. I can think of other examples where people would want that: if there was a robot assisting the elderly, where you had to change, people are less comfortable doing stuff like that in front of a social agent– or helping you go to the bathroom.
So there’s times, for safety and also for just kind of where you actually want privacy, where it would be nice to have a less social machine. But it’s hard to do. It’s easier to make it social.
IRA FLATOW: Kate, do you agree?
KATE DARLING: Oh, absolutely. I think that we’re biologically hardwired to really respond to robots as this physical thing that moves autonomously in our space. And I think that’s really difficult to turn off.
We were talking about the Roomba earlier, the vacuum-cleaner robot that people will name and treat like a pet. And the company says that when people send their Roomba in to get repaired, they’ll often ask for the same one back and not a different one. And if people are doing that with, like, a disk that just moves around your floor and doesn’t have eyes or anything, it’s a huge design challenge to be able to create a robot that moves autonomously that people won’t somehow treat like it’s alive.
IRA FLATOW: We have so many tweets coming in. I’ll read most of them. But I’ll go to the break saying– from Sidney. He said, “I came home to find the robot vacuum cleaner tangled in a rug, whining for help. Unconsciously, I said, ‘Poor baby. I will get that.’ I felt stupid immediately.”
We’ll talk about a lot more tweets and your calls– 844-724-8255. Talk about robots in your life with Heather Knight and Kate Darling. We’ll be right back after this break. Stay with us.
This is Science Friday. I’m Ira Flatow. We’re talking this hour about why we feel compassion for some robots and maybe get creeped out by others with my guests Kate Darling, robot ethicist at the MIT Media Lab, and Heather Knight, robotics researcher from Oregon State University. Our number: 844-724-8255.
We have so many people want to talk about robots. They’ve been part of our culture. Remember Robby the Robot from Forbidden Planet, also Lost in Space? And you had Rosie the Robot from The Jetsons. Robots have been with us ever since the invention of the word from a Czech play years ago.
Let’s go to the phones. That’s a good place to go. Let’s pull my phone over close to me so I can–
Have to be able to reach it. Here we go. Let’s go to Christine in Cincinnati. Hi, Christine.
CHRISTINE: Hi. Your conversation reminds me of our alarm clock we’ve had in our family. And I have a big family. And now it’s on my youngest child. And it’s not even functioning properly, but she doesn’t want to give it up.
It’s a cute little teacup with a little cat on the top. And it’s just this beautiful piece of art.
IRA FLATOW: Oh, yeah.
CHRISTINE: Even when it’s not working anymore, I think I’m going to keep it.
IRA FLATOW: Why do you feel such a great attachment to it? What is it about it?
CHRISTINE: Well, I have eight kids. There’s a certain time in their development where it’s like, OK, it’s up to you to get yourself up out of bed. You don’t have Mommy coming along and coddling you.
And this thing makes a good sound when it goes off. And it has been functioning, even though one of the little lights has started to not burn brightly when it– and that indicates that the alarm is on. So I’m always going in there– still coddling– to check if that alarm is on, because the light’s off.
IRA FLATOW: I get it. I see the attachment that you have, and good luck with that. It sounds familiar, Kate and Heather, you know?
HEATHER KNIGHT: Mhm.
IRA FLATOW: People are attached. We have so many tweets. Let me go through some of the tweets.
“I don’t know if Vicki feels the same, but I do love my robot vacuum.” We keep hearing about– “Vicki is an amazing household helper.” That’s Vicki the Vacuum.
And Dr. Rachel says, “Scientists get very emotionally attached to our equipment. I know several people who are unwilling to be rude about their experimental setup while in the lab in case it hears them, gets offended, and stops working.” Peter says, “The charging port on one of our tablets broke. And as the battery got low, it kept chirping. So I had to actually get it out of the room because I couldn’t listen to it anymore and felt very bad.”
And it goes on and on. Peter says, “I consciously make a point to say ‘please’ and ‘thank you’ to Google Assistant, even though I have no idea why.” Consciously, not unconsciously. All sound familiar to you?
KATE DARLING: Oh, yep.
HEATHER KNIGHT: Yeah, it’s pretty interesting. I love the story of the alarm clock because it makes me think about how one of the roles of technology, or of robotics, in our lives is increasing personal autonomy. So you had the mother that had to wake up her children. And now the child can, with the help of this device, be independent. And I think that’s a great role for robots– sort of partnering with a person to help them achieve something that would be harder for them to do by themselves.
IRA FLATOW: Sometimes we hear that– people try to make robots look like people. And there’s something called the uncanny valley where robots can get kind of creepy. Why do we feel uncomfortable at that point about them?
HEATHER KNIGHT: Mm, it’s kind–
KATE DARLING: Well, there is some different–
HEATHER KNIGHT: Yeah.
KATE DARLING: Oh.
HEATHER KNIGHT: Go for it, Kate.
IRA FLATOW: Kate.
KATE DARLING: I think there are a bunch of different theories for why this happens. And I’m sure you have thoughts on this as well. For me, it has a lot to do with expectation management.
So if you have a robot that looks too close to something that you’re intimately familiar with, like a person– or it could also be you have a robot that’s supposed to look like a cat, and it doesn’t behave exactly the way you expect it to. Then that kind of creeps people out because it feels off. So I feel like the uncanny valley is a little bit about managing people’s expectations.
And a lot of successful design in robotics tries to gravitate more towards shapes and forms that we relate to but that aren’t trying to just mimic something that we’re very close to. So I mentioned earlier the baby dinosaur robots that we had in this workshop. The brilliant thing about those is that people have never actually interacted with a dinosaur. So instead of trying to be like the perfect dog or cat, it just tries to be like this cartoonish dinosaur that you’re much more willing to kind of suspend your disbelief for and interact with as though it were its own thing.
IRA FLATOW: Mhm. Let’s go to Chicago with Ben on the phone. Hi. Welcome to Science Friday.
BEN: Hi there.
IRA FLATOW: Hi there. Go ahead.
BEN: It’s been my experience that people tend to anthropomorphize everything, even other people. To the extent that you don’t get feedback to contradict your assumption, you’ll plug in your own values to finish the picture. And in the case of robots, I would think we would tend to do the same thing. And in that case, it seems like the less feedback they give us, the more we will project ourselves onto them and the more attached we will become.
IRA FLATOW: Kate, what do you think of that? Thanks for the call.
KATE DARLING: Yeah, I actually think that that’s somewhat true in that– the design principle that I just mentioned of not trying to completely, like, recreate a human face, for example. You could just recreate aspects of a human face, like eyes, or maybe certain– actually, Heather would have thoughts on this as well. But sometimes less is more in creating designs that people will relate to, because like you say, they will project themselves onto it, and they will fill in the blanks with their imagination.
HEATHER KNIGHT: Yeah, totally. I mean, I totally agree that people are really good at filling in the blanks. And so sometimes, if you give them more abstract forms, then they can kind of– it’s like a Noh theater mask. There’s a form of Japanese theater where they don’t have very much expression. And so then you can read these– all kinds of complexity.
One of the things I really liked about robotics is how international and multicultural it is. So just to offer a different perspective, Hiroshi Ishiguro talks about having an ultra humanoid-like robot means that we can then start bringing a robot to a fancy restaurant. It’s totally not cool to Skype with your wife at a fancy restaurant. On the other hand, if you brought an avatar that is very human-like that she could log in to– he talked about how he wanted to just be able to take her on a date when he was traveling.
IRA FLATOW: Speaking of international aspects, we always see so many Japanese robots. They always seem to be in the forefront of robotics– looking like people, doing things, being accepted in culture. Is there something about Japanese culture that makes them more accepting?
HEATHER KNIGHT: Korea is huge right now as well. So I think it’s religion. If you look at monotheistic faiths, like the faiths– there is storytelling where you’re usurping the role of God to try to make a lifelike creature. Whereas in the more Shinto cultures, everything wants to work in harmony, and there isn’t a hierarchy of creation between mountains and animals and humanity and robots.
IRA FLATOW: Mhm. Let’s go to the phones. Lots of people want to get in on the conversation. Anna in Boston, hi. Welcome to Science Friday.
ANNA: Hi there.
ANNA: So I was actually wanting to make a comment– something similar to the last caller, I suppose, where we’ve been so exposed over time, from Asimov’s Robbie the Robot to Data in Star Trek. It seems like humans are predisposed to sort of project emotions onto robots and kind of bring them into our family and familiarize ourselves with that. I guess, how much are we just prone to do that ourselves? How much of it is influence from the media? Especially when Asimov was consciously doing that, trying to turn around the Frankenstein conflict that Shelley had created with that God-fooling-with-the-creation-of-man idea.
IRA FLATOW: All right. Thanks for those comments. Heather, what do you think?
HEATHER KNIGHT: Yeah, I think storytelling is incredibly powerful. I’ve never met an engineer that wasn’t influenced by the science fiction that they had read and created. That being said, I think that we learn a lot about what’s fundamental to being human in sort of thinking about how we anthropomorphize these machines. Like, even for other types of animals, this idea of being able to rapidly identify what’s a predator comes first, followed by, are you a member of my tribe? Are you a potential romantic partner?
So we do these things very rapidly, kind of at the snap of our fingers. And we don’t have that much control over it. So it’s sort of about recognizing what people do so that we can design the technology to best be able to achieve what it’s trying to get done.
IRA FLATOW: Heather, don’t you have a robot called Data that is named after Star Trek?
HEATHER KNIGHT: Yeah. Yeah. So I created a robot stand-up comic. It’s a NAO robot that I named Data, maybe seven years ago now. So we’ve gotten to perform a bunch.
IRA FLATOW: Is Data like a person to you?
HEATHER KNIGHT: It’s funny. I think the best metaphor is, like, someone that writes a novel and the characters start coming alive. So I definitely help this robot write its jokes. And it becomes sort of– it’s self-deprecating comedy that I don’t perform myself.
IRA FLATOW: Let’s go to Charles in Idaho Falls, Idaho. Hi, Charles. Go ahead.
CHARLES: I recently narrated an audio book by Gary Starta, called What Are You Made Of?, that talks a lot about, basically, the potential of eventual discrimination against androids. Like, the premise is that, hundreds of years in the future, the creation of androids has actually been outlawed, that it’s illegal to create a robot that looks like a human, because humans will then think of it as a human. And anyway, it’s an interesting book. But I guess it’s a matter of do you think we’ll get to that point where some right-wing organization or whatever might say, OK, we can’t do androids because it’s– I don’t know– discrimination?
IRA FLATOW: All right. Yeah, give them rights, in other words. What do you think, Kate?
KATE DARLING: Well, yeah. So I think we’re already seeing some hints of that. I don’t know if you’ve heard of the efforts to ban sex robots. But there have been some protests against some new sex technology that is very human-like. And people have various arguments for why they’re not comfortable and, like, what societal effects it might have to have very humanoid robots of that sort. So I absolutely think that there will be a lot of societal conversation as robots enter into more shared spaces and we see more design that is very lifelike and that we start treating as lifelike.
IRA FLATOW: On the other hand, people talk about the dystopian societies where robots rebel– the I, Robot movie, things like that. Or they’re just still not trusting enough to let a robot self-drive their car for them yet.
KATE DARLING: Hm.
HEATHER KNIGHT: Mhm.
KATE DARLING: I think–
HEATHER KNIGHT: Caution is advised.
KATE DARLING: –there’s a lot of fear.
IRA FLATOW: “Caution is advised,” you say? All right. Let me just remind everybody this is Science Friday from WNYC Studios. Are you in agreement with some of our great thinkers in technology that say you should fear the upcoming intelligence of robots?
HEATHER KNIGHT: I think it’s as dangerous to be a pure techno-optimist as it is to be a pure techno-pessimist. I think proceeding with caution is the only way to go ahead. With autonomous cars, it’s really difficult for machines to have the same kind of perception as people. And so we should proceed with caution.
That being said, we shouldn’t be blind in our fear either. It’s important to consider both sides. I think that, like, machines causing destruction is more about the people behind the machines. And generally, even in storytelling, if you’re a good parent to the technology, generally, it grows up to not be a sociopath.
IRA FLATOW: Mm. During the history– I’ve been following science for many decades. And there have been times in biology, whether it’s genetic engineering or other kinds of engineering, where scientists have said, hey, we ought to stop and think about where we’re headed and talk about this before it gets out of control. Do you think we will reach a time with that in robotics? Or have we reached that time?
HEATHER KNIGHT: Yeah. Kate, definitely tell them about We Robot.
KATE DARLING: Yeah. So there is a conference that started seven years ago that is about exactly this. And seven years ago, people said we were crazy for wanting to talk about robots and the societal issues in robotics.
And now it’s becoming, I think, a national and even international conversation that’s getting a lot of attention. And I think that, right now, people are seeing robotic technology enter into public spaces, transportation systems, workplaces. And I think that it’s becoming more of a conversation that’s starting to happen.
IRA FLATOW: Is it going to become a political conversation?
HEATHER KNIGHT: Oh, for sure. Yes. And it has to.
I personally am not worried about robots taking over the world and killing us all, like some people are. But I do worry about some of the more near-term issues with privacy and data security, and autonomous vehicles, and the way that we integrate robots into workplaces. I think there are a ton of societal issues that we need to be dealing with now. And we need consumer protection agencies to be aware of the technology. I think it has to be a political conversation, and it’s never too early to start.
IRA FLATOW: Well, we’re starting it today, I hope. Along those lines– can robots pressure us into thinking in a certain way, or change our beliefs or behaviors?
HEATHER KNIGHT: Can robots help us meet our own goals? I mean, I love that you’re asking a question that isn’t just about utility in robotics, because I think, as soon as you put robots close to people, then you can start looking at what are human needs?
IRA FLATOW: Hm. So what is the answer to my question then?
HEATHER KNIGHT: So in certain things, like coaching, I think that that’s something– we often know what we’re supposed to do. It’s just really difficult to do it without a friend. So sometimes robots can help us do that.
There’s also this idea of a robot in a triad, where a robot can help two people enter into conversation or connect when they might not otherwise know they had things in common. So I think, in that respect, sometimes it can help start conversations or keep them going. I think it’s difficult for robots to mastermind societal change.
IRA FLATOW: But if a robot says something, will people tend to believe it more than a person?
HEATHER KNIGHT: Oh, unfortunately, yes.
That’s not necessarily a good thing, though.
IRA FLATOW: And that’s part of the big discussion you say we need to have.
HEATHER KNIGHT: Mm. Yeah. Yeah.
So for example, a medical diagnosis by a machine– whether it’s software or whether it’s actually a robot– is sometimes given more weight because it’s presumed that it’s based on calculations. Of course, calculations come from programmers and numbers and people. So it’s not actually more valid. And there have been cases in history where, for example, an X-ray machine was miscalibrated to be something like 10 times as strong, and people trusted the machine over the patients.
IRA FLATOW: On the other hand, I’ve seen recent research on AI which shows that AI was better at diagnosing than doctors, who brought their own biases. But that’s another topic for another day.
HEATHER KNIGHT: Collaborative future, that’s the future.
IRA FLATOW: Kate Darling, robot ethicist at the MIT Media Lab, and Heather Knight, robotics researcher from Oregon State University, thank you for taking time to be with us today.
HEATHER KNIGHT: Our pleasure.
KATE DARLING: Thank you.
IRA FLATOW: And just a reminder, if you liked that robot discussion today, check out the new season of our documentary podcast Undiscovered. The first episode is all about our squishy feelings for robots. Search for Undiscovered wherever you get your podcasts, and subscribe to make sure you hear it.