10/26/2018

Self-Driving Cars Are Bringing The Trolley Problem Into The Real World

12:11 minutes

Trolleys in the center of Prague, Czechia. Credit: Martyn Jandula, via Shutterstock.

If you’re a casual student of ethics—or even just a fan of the television show The Good Place—you’ve most likely heard of the trolley problem. It goes like this: A runaway trolley is on course to kill five people working further down the track—unless you pull a lever to switch the trolley to a different track, where only one person will be killed.

The trolley problem is designed to be a moral thought experiment, but it could get very real in the very near future. This time, it won’t be a human at the controls, but your autonomous vehicle. The United Nations recently passed a resolution that supports the mass adoption of autonomous vehicles, which will make it more likely that a driverless car might cross your path (or your intersection). Who should an autonomous vehicle save in the event that something goes wrong? Passengers? Pedestrians? Old people? Young people? A pregnant woman? A homeless person? Sohan Dsouza, a research assistant with MIT’s Media Lab, discovered that the way we answer that question depends on the culture we come from. He joins Ira to discuss how different cultural perspectives on the trolley problem could make designing an ethical autonomous vehicle a lot more challenging.




Segment Guests

Sohan Dsouza

Sohan Dsouza is a research assistant in the MIT Media Lab in Cambridge, Massachusetts.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow. Even if you’re just a casual student of ethics or just a fan of the TV show, The Good Place, you’ve most likely heard of the trolley problem– a runaway trolley. Here it goes. It’s on course to kill five people working down the track, unless you pull a lever to switch the trolley to a different track where only one person would be killed. Do you intervene to kill the innocent bystander?

[AUDIO PLAYBACK]

– Michael, what did you do?

– I made the trolley problem real, so we could see how the ethics would actually play out. There are five workers on this track and one over there. Here are the levers to switch the tracks. Make a choice.

– The thing is– I mean, ethically speaking–

– No time, dude. Make a decision.

– Well, it’s tricky. I mean, on the one hand, if you ascribe to a purely utilitarian world view–

[CRASH]

[END PLAYBACK]

IRA FLATOW: Yeah, that was a segment from The Good Place. And you can see it’s one thing to imagine the trolley problem with a human at the controls, but what about the driverless car, which is controlled by a computer? Autonomous vehicles are set to take over the road in the not too distant future. The UN recently passed a resolution that supports their mass adoption, and that will put the decision of whom to save and whom to kill in the hands of a machine.

Who should the car decide to protect? The passengers, the pedestrians, older people, younger people, a pregnant woman, a homeless person? My next guest discovered that how we answer that question depends on the culture we come from. And that could make designing an ethical autonomous vehicle a lot more challenging. Sohan Dsouza is a Research Assistant with the MIT Media Lab in Cambridge. His research is in the journal Nature this week. Welcome to Science Friday.

SOHAN DSOUZA: Thank you. It’s a pleasure to be on.

IRA FLATOW: Nice to have you. Why is the trolley problem the best way to think about the future of driverless cars?

SOHAN DSOUZA: Well, driverless cars promise to eliminate a large number of accidents, like the vast majority of accidents that currently happen due to human error. But in the small number of cases where you have unavoidable accidents, there may be cases of unavoidable harm. Typically, we’ve had Asimov’s laws of robotics, and those aren’t really sufficient for situations where an AI has to balance risks or balance harm or distribute harm. So yeah, that’s–

IRA FLATOW: Yeah, Asimov’s laws– one of his laws of robotics says that the robot will never harm its creator. And that may not really be the case when driverless cars come about.

SOHAN DSOUZA: Yeah, so in 2016, we actually released the Moral Machine website, which is the main source of data for this study. It was a companion website to a paper that my colleagues and others published about the social dilemma of autonomous vehicles. And that was a study about what people think of autonomous vehicles that might have to, say, sacrifice one passenger to save five pedestrians.

And people were found to want that as the norm, like sacrificing cars– cars that might sacrifice their passengers– but they don’t want to use one or be in one themselves. So that’s a bit of a dilemma. And we wanted to get more data about all the different factors that might go into this equation.

IRA FLATOW: And in your study, when you surveyed people all over the world, there did not seem to be one philosophy of what should happen in this situation?

SOHAN DSOUZA: Yes. There were broad global trends. For example, nearly every country had a preference between male and female characters in morally significant outcomes: they would prefer to spare females. But the relative strength of that preference varied from country to country. And we noticed that it clusters more or less according to cultural and geographic proximity.

IRA FLATOW: Such as? Give me Asia versus Middle East, places in Europe, North America. How were they all different?

SOHAN DSOUZA: So there were three main clusters. Asia– as in East Asia– some Middle Eastern countries, and South Asia kind of clustered together. We called that the Eastern cluster. Then there’s the Southern cluster, which is dominated by Latin American countries and countries of francophone heritage. And the other countries are in the Northern cluster– oh, sorry– the Western cluster, which is mostly Western countries of Protestant or Catholic provenance.
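
To make that concrete, here is a minimal sketch in Python of the kind of analysis being described: each country gets a vector of preference strengths, and hierarchical clustering groups countries whose vectors look alike. The countries and numbers below are invented for illustration, not the study’s data.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# One row per country; each column is the strength of one preference
# (e.g., spare the young, spare more lives, spare females).
# Values are made up purely for illustration.
countries = ["US", "France", "Japan", "Saudi Arabia", "Brazil", "Colombia"]
prefs = np.array([
    [0.55, 0.62, 0.30],
    [0.60, 0.64, 0.35],
    [0.22, 0.50, 0.28],
    [0.25, 0.48, 0.26],
    [0.70, 0.55, 0.42],
    [0.72, 0.53, 0.44],
])

# Agglomerative (hierarchical) clustering, then cut into three groups,
# loosely mirroring the Western/Eastern/Southern split described above.
Z = linkage(prefs, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(dict(zip(countries, labels)))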

IRA FLATOW: So in the countries in Asia and the Middle East, the preference to spare younger rather than older characters was much less pronounced.

SOHAN DSOUZA: Yes. The general preference for sparing younger over older characters was much less pronounced in the case of the Eastern cluster.

IRA FLATOW: And in Europe and North America, they preferred to spare who?

SOHAN DSOUZA: Oh, I mean, they had– more or less that was the average. And then in the Southern cluster, that’s mostly the Latin American countries, they had a slightly higher propensity to save the young.

IRA FLATOW: Now, I took this test. It was quite fascinating. And I know I had my own personal reasons for making the choices I made. How can you tell what types of logic people are using to make these choices?

SOHAN DSOUZA: There are different considerations that might go into this. I mean, the classic one is utilitarianism, as we heard in that clip you played. And deontology. So utilitarianism asks, should we save as many lives as possible, even if that means committing to an action, like intervening? And there’s also the deontological approach, which is, like, do no harm. So those are the well-known ones.
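
To make the contrast concrete, here is a minimal sketch of the two stances as decision rules for the trolley setup. The function names are hypothetical, chosen for illustration; nothing here comes from the study itself.

def utilitarian(stay_deaths, switch_deaths):
    # Save as many lives as possible, even if that means
    # committing to an action (pulling the lever).
    return "switch" if switch_deaths < stay_deaths else "stay"

def deontological(stay_deaths, switch_deaths):
    # "Do no harm": refuse to take an action that kills someone,
    # even when inaction costs more lives.
    return "stay"

# Five workers on the main track, one on the side track.
print(utilitarian(5, 1))    # -> switch
print(deontological(5, 1))  # -> stay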

And we’ve noticed, for example, that in different countries there are cultural factors and even economic factors that influence what decisions people make. For example, in countries with relatively higher economic inequality, there is a relatively higher preference for sparing high-status individuals versus low-status individuals.
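
As an illustration of that country-level pattern– again with invented numbers, not the study’s data– one could correlate a country’s income inequality (its Gini coefficient) against its measured preference for sparing high-status characters:

import numpy as np

# Hypothetical Gini coefficients and hypothetical "spare high-status"
# preference strengths for four unnamed countries.
gini = np.array([0.29, 0.32, 0.48, 0.53])
status_pref = np.array([0.38, 0.41, 0.62, 0.70])

# Pearson correlation across countries; a positive r would mean more
# unequal countries show a stronger high-status preference.
r = np.corrcoef(gini, status_pref)[0, 1]
print(f"r = {r:.2f}")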

IRA FLATOW: You put that into your test. The scenarios include income, gender, physical characteristics, different ages. These aren’t things that the current driverless cars can identify, are they? I mean, they’re not going to be knowledgeable of all those different things when it comes to making a decision. So is that really a practical way to study it?

SOHAN DSOUZA: It depends what you use it for. We do not really expect that we’ll just take this data and build a model and plug it into future autonomous vehicles. What we want to do is understand what the public’s reaction to an autonomous vehicle crash might be. We want to understand what fears need to be allayed in order to encourage adoption of autonomous vehicles. So those are the primary goals of this: to seed that conversation– to provide the ground truth for a conversation about autonomous vehicle ethics.

IRA FLATOW: You must talk to everyday people– your relatives and friends– about your tests and about getting in a driverless car. I mean, just between you and me, do people want to get into a car where they don’t have the option of protecting their own life? They know that the car might choose that they die instead of someone on the street. Are they going to want to buy that kind of car or get into one?

SOHAN DSOUZA: So that’s actually what the 2016 paper looked at. And broadly, people generally prefer not to buy such a car. Individually, it varies. I mean, it sometimes depends on age, and from what I have seen, familiarity with AI and AV technology affects these decisions.

IRA FLATOW: I would think that the car engineers must be– as you rightly point out– doing tests about this and thinking a lot about this topic.

SOHAN DSOUZA: Yes. The industry is certainly considering this. I mean, Mercedes– one of their heads of automation said something back in 2016 about autonomous vehicles, that they might have to save the person in the vehicle, if you can save the person in the vehicle. But then there was backlash against that. And Ford, for example– the chief of Ford said that for autonomous vehicles, it will ultimately have to come from a social consensus. And that’s kind of the conversation we hope to see here.

IRA FLATOW: It’s not going to be a commercial some day where the car companies are competing for your business by saying, we’ll put you first, [LAUGHS] instead of the pedestrian.

SOHAN DSOUZA: Yeah, automakers might have different interests than, say, insurers, policymakers, consumer advocacy groups– and, of course, the consumers. There are stakeholders with differing interests, and they will have to have a conversation to come to a consensus about where to move. And that conversation might look different in different countries, because of the different strengths of the preferences along each of these dimensions.

IRA FLATOW: Let me see if I can get a quick phone call in. Lee in Tucson, welcome to Science Friday. Hi, there. Quickly.

AUDIENCE: Hello.

IRA FLATOW: Yes, go ahead.

AUDIENCE: Am I on air?

IRA FLATOW: You are. Go ahead.

AUDIENCE: OK. I guess my question is, in these ethical considerations– in the survey, or just in the practice itself that your guest is talking about– is there ever actual material consideration of just not doing it at all? In other words, do the surveys include a question about actually just dropping this, not doing it at all? And do the people who consider the ethics of the whole thing really, actually consider just dropping it and not doing it at all? And I’m referring to self-driving cars, AI, all that.

IRA FLATOW: Yeah, you think maybe this is just a bad idea.

SOHAN DSOUZA: I mean, we think that more knowledge is always good. Ultimately, when self-driving vehicles start to come on the market in greater numbers– and maybe even autonomous vehicles without any manual driving options– there’s an issue of whether people will actually take to these. And in order for that to happen– I mean, even a default decision, even a decision to randomize, even a decision to never intervene, those are still decisions that have to be made.
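
Those defaults can be written down as explicit policies, which is exactly the point: even “do nothing” is a choice someone has to encode. A minimal sketch, with hypothetical names and a made-up dilemma:

import random

# Each policy maps a dilemma (action -> who gets hurt) to an action.
dilemma = {
    "stay": {"passengers": 1, "pedestrians": 0},
    "swerve": {"passengers": 0, "pedestrians": 2},
}

def never_intervene(d):
    # The do-no-harm default: stay the course, whatever happens.
    return "stay"

def randomize(d):
    # Leave it to chance rather than encode a preference.
    return random.choice(list(d))

def protect_passengers(d):
    # The stance attributed to Mercedes above: passengers come first.
    return min(d, key=lambda a: d[a]["passengers"])

for policy in (never_intervene, randomize, protect_passengers):
    print(policy.__name__, "->", policy(dilemma))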

IRA FLATOW: And that’s a good place to stop, because we’ve run out of time. And this is a topic we will pick up. I want to thank you, Sohan, for taking the time to be with us today. Sohan Dsouza is a research assistant with the MIT Media Lab in Cambridge.

Copyright © 2018 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producer

About Katie Feather

Katie Feather is a former SciFri producer and the proud mother of two cats, Charleigh and Sadie.
