08/03/2018

Is Facial Recognition Ready For The Real World?

17:00 minutes

The ACLU’s test used Amazon Rekognition to compare images of members of Congress with a database of mugshots. The results included 28 incorrect matches. Credit: ACLU

Facial recognition systems, the type of technology that helps you tag your friends on Facebook, are finding their way offline and into real-world environments. Some police departments are using the technology to help identify suspects, and companies are marketing face-identifying software to schools to increase security.

But these types of systems are not flawless. A study found that facial recognition algorithms were notably less accurate across different genders and skin tones. And in the United States, there are currently no federal policies that specifically regulate facial recognition technology.


Technology reporter Natasha Singer and Safiya Noble, author of the book Algorithms of Oppression, talk about what kinds of questions facial recognition technology raises for tech creators, policymakers, and the general public.


Segment Guests

Natasha Singer

Natasha Singer is a technology reporter for The New York Times in New York, New York.

Safiya Noble

Safiya Noble is author of Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018) and an assistant professor in the Annenberg School of Communication at the University of Southern California in Los Angeles, California.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow. By now, I’m sure you’ve come across facial recognition technology. It’s that little box that pops up in your photos that helps you tag your second cousin. It’s the new way to unlock your smartphone. I mean, fingerprints are so 1892.

That same type of technology is now finding its way outside of social media, the privacy of your space, and into the wide world. You’ll step out into the street, into an airport, or the mall, and your face is the place that is getting tracked.

For a few bucks, police departments can buy software from Amazon to sort through a database of faces. And you can even buy facial recognition software from Amazon. But facial recognition technology is not without flaws. For example, the ACLU ran Amazon’s software through images of people in Congress, and it matched 28 members of Congress to faces of publicly available mug shots. Incorrectly, of course.

And when we reached out to Amazon, they sent a statement. I’ll read it, because it’s only fair to do so. The results could probably be improved by following best practices around setting the confidence threshold, which is the percentage likelihood that Rekognition found a match, used in the test. And while 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media uses, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty. When using facial recognition for law enforcement, we guide customers to set a threshold of at least 95% or higher.

And that’s what we’re going to be talking about today. What questions should we be thinking about as these technologies are moving into the public realm? And is there an inherent bias in facial recognition artificial intelligence? That’s what we’re going to be talking about. If you’d like to join us, our number is 844-724-8255.

You can also tweet us @scifri. Let me introduce my guest. Safiya Noble is an Assistant Professor in Communications at the University of Southern California. She’s author of Algorithms of Oppression: How Search Engines Reinforce Racism. Welcome to Science Friday.

SAFIYA NOBLE: Hi, thank you.

IRA FLATOW: You’re welcome.

SAFIYA NOBLE: Good to be here.

IRA FLATOW: Thank you. Natasha Singer is a technology reporter for The New York Times who has written extensively about this. Welcome to you today also, Natasha.

NATASHA SINGER: Thank you.

IRA FLATOW: Safiya, facial recognition software is a type of algorithm, right? Can you give us an overview of just how facial recognition works?

SAFIYA NOBLE: Well, in the simplest terms, we can think of facial recognition technology as kind of a type of software that is mapping various kinds of topographies, whether it’s land that we often use mapping technologies on, or our faces. It’s trying to identify certain kinds of distinctive points on the topography of our faces, and then match those to known images that might exist in a database.

It’s pretty crude, even though we maybe think of it as kind of a sexy type of artificial intelligence.
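In code terms, the matching step described above can be sketched very roughly as: reduce each face image to a numeric template, then compare an unknown face against a database of known templates by distance. This is only an illustrative sketch, not any vendor’s actual method; the embed_face function is a hypothetical stand-in for the trained model a real system would use.

```python
# Minimal, illustrative sketch of template matching for face recognition.
# embed_face() is a hypothetical placeholder; real systems use a trained
# neural network to produce the face template. Assumes all images are
# equal-size grayscale arrays.
from typing import Dict, Optional

import numpy as np


def embed_face(image: np.ndarray) -> np.ndarray:
    # Placeholder "embedding": flatten and normalize the pixel values.
    vec = image.astype(np.float64).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)


def best_match(unknown: np.ndarray,
               database: Dict[str, np.ndarray],
               threshold: float = 0.6) -> Optional[str]:
    """Return the name of the closest known face, or None if nothing in the
    database is similar enough (lower distance means more similar)."""
    query = embed_face(unknown)
    best_name, best_dist = None, float("inf")
    for name, known_image in database.items():
        dist = float(np.linalg.norm(query - embed_face(known_image)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```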

IRA FLATOW: Mm-hm. And anyone can buy, Natasha, anyone can buy and download Amazon’s Rekognition software, which I was talking about earlier. And some law enforcement agencies are using it, right? Is it that easy, just to do that?

NATASHA SINGER: Well, it’s online, and you can download and use it. And if you can create a database of photos that you have, then you can compare photos of unknown people to the people you know in your database. And that’s what the ACLU did. They tried to mimic what a police department would do, so they created a database of 25,000 mugshots that were publicly available, and they compared photos of every member of Congress to this database of mugshots. And 28 members of Congress were mistakenly matched with mugshots.
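The workflow Singer describes maps onto Rekognition’s collection-search feature. The sketch below shows roughly what such a test could look like using the boto3 SDK; it is not the ACLU’s actual code, the file paths and collection name are made up, and running it would require AWS credentials and incur Rekognition charges. Note the FaceMatchThreshold parameter, which is the similarity cutoff discussed next (it defaults to 80, while Amazon says law enforcement uses should set 95 or higher).

```python
# Illustrative sketch (not the ACLU's actual code) of a Rekognition
# collection-search test using boto3. Paths and collection name are made up.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# 1. Build a searchable collection from publicly available mugshot photos
#    (the ACLU's test used about 25,000 of them).
rekognition.create_collection(CollectionId="mugshot-test")
for mugshot_path in ["mugshots/0001.jpg", "mugshots/0002.jpg"]:
    with open(mugshot_path, "rb") as f:
        rekognition.index_faces(
            CollectionId="mugshot-test",
            Image={"Bytes": f.read()},
            ExternalImageId=mugshot_path.split("/")[-1],
        )

# 2. Search the collection with a photo of a known person, e.g. a member of
#    Congress. FaceMatchThreshold defaults to 80; Amazon says law enforcement
#    uses should set it to 95 or higher.
with open("congress/member.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="mugshot-test",
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,
        MaxFaces=5,
    )

for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```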

IRA FLATOW: And Amazon says, well, they should have turned it up to 98% recognition.

NATASHA SINGER: So 95%. I think there are two issues with that. One is that the ACLU pointed out that on Amazon’s own site, there was an example using humans set at an 80% similarity score. And that was the default. So if it really should be 95% for humans, you know, the ACLU says Amazon should be saying that. And it doesn’t tell that to ordinary customers.

But I think even more problematic for Amazon is that there is an expert researcher at the MIT Media Lab, and she did a crucial study earlier this year showing that Microsoft’s and IBM’s facial recognition was flawed. She’s also looked at the Amazon software, and she found it even more erroneous than the ACLU did. And it’s really hard to doubt her research.

IRA FLATOW: And Safiya, go ahead. Because I know you study racial bias in online algorithms.

SAFIYA NOBLE: I do. I do. And I think Joy Buolamwini’s work from the MIT Media Lab is really the state of the art in terms of understanding the racial and gender biases that are built into these facial recognition systems.

So her study has found, in fact, that facial recognition software is terrible at recognizing brown faces, brown skin. The darker the tone of your skin, the less likely the facial recognition software will work. And it’s even more abysmal when you apply that to women and women of color.

And of course, this is the kind of thing that I also study which is, how is it that we come to have so many kinds of biases built into artificial intelligence, and algorithmic kind of sorting platforms, and software? And the consequences of that are not insubstantial. What we find, for example, is that many of these technologies are often facing the most vulnerable people in our societies.

So they’re deployed, for example, by law enforcement in communities predominantly of color, where poor people live, in communities that are already overpoliced. We see them deployed in terms of immigration, where they’re facing the Southern border, rather than the Northern border. So it’s not just that the software itself is technically flawed, but it is also used in some rather egregious ways against the most vulnerable members of our population.
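Audits like the MIT Media Lab study Noble refers to quantify these disparities by reporting error rates separately for each demographic group, rather than a single overall accuracy number. A toy sketch of that kind of per-group breakdown, with entirely made-up records, might look like this:

```python
# Toy sketch of a per-group error audit in the spirit of demographic bias
# studies: error rates are broken out by group instead of averaged away.
# The records below are invented for illustration only.
from collections import defaultdict

# Each record: (group label, predicted label, true label)
results = [
    ("lighter-skinned man", "male", "male"),
    ("lighter-skinned man", "male", "male"),
    ("darker-skinned woman", "male", "female"),   # misclassification
    ("darker-skinned woman", "female", "female"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in results:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate over {totals[group]} examples")
```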

IRA FLATOW: And then, Natasha, let’s talk about and expand a little bit on where else we’re seeing facial recognition being used in public.

NATASHA SINGER: Well, as you mentioned, if you’re on Facebook tagging your photo albums and it pops up and says, is this Jane, that is facial recognition at work. And Facebook has gotten into a bit of trouble for that, because it was turned on by default in Europe, and Europe has a tough new privacy law, where you have to ask for specific consent to do that kind of stuff.

On certain iPhones and Windows laptops, you can open your device with your face instead of your fingerprint, or an alphanumeric code. So we’re seeing it be normalized in consumer technology, as well as in kind of policing.

IRA FLATOW: I heard that it’s in schools, too.

NATASHA SINGER: There are some schools that have put in face recognition to identify students, which they think is kind of a deterrent to shooters. But you know, the problem is that face recognition is also a control mechanism. And I had one face recognition company say to me, sure, and then we close the doors at 8:00 AM. And any kid who shows up with their face afterwards, you know, the doors will be locked to them. And so it’s problematic, both in the consumer space and in the law enforcement space.

IRA FLATOW: Safiya, is it about improving the data? We’ve talked on this program before about the biases in algorithms. In this case, is it about improving the data that facial recognition algorithms– I’m having a tough time saying that today– are trained on?

SAFIYA NOBLE: Well, certainly, that’s one dimension of it. Those of us who work with data know that data is a human product. It’s something that scientists make, social scientists and others. And so, certainly, data is constructed in flawed ways, in biased ways. And then machines are trained on flawed data.

Machines also detect new patterns. And when we start thinking about things like machine learning and big data, these systems are often trained in ways that produce new forms of data whose origin story is biased or flawed. So that becomes even more difficult to intervene upon, because the promise and peril, unfortunately, of deep machine learning is that new patterns and new data will be constructed that human beings could not process on our own, with our brain capacity. And so it will be increasingly difficult for human beings to intervene upon and recognize flawed data systems.

I think the secondary issue, though, beyond the training of machines on low-quality data, is that there’s a broader kind of social, ethical framework that we need to be thinking about. What does it mean to automate decisions and outsource certain types of decisions to artificial intelligence like this, and to preclude human beings from making certain types of decisions?

How will these technologies be deployed, again, in service of whom, and against which parts of our society? And that framework for thinking through the complexity really doesn’t exist. I mean, we really don’t have an adequate legislative, kind of public policy space to talk about the negative impact of some of these technologies.

IRA FLATOW: Natasha, what kind of oversight do we need, then, of this kind of stuff?

NATASHA SINGER: Well, it’s interesting because Microsoft, a couple of weeks ago, called for government regulation of facial recognition. They said it was too risky for tech companies to regulate it on their own. And as I said, in Europe, they already regulate it by requiring consent before you suck up somebody’s facial data and identify them.

And in the United States, it’s a question of, first of all, what kind of government oversight of government use do we need? And then, what kind of oversight of consumer use? Because the main issue is, as Americans in a democracy, we have this idea that we have the right to be anonymous in public, to go to the supermarket in our pajamas, to go to a political protest freely, and not be recognized. Facial recognition threatens that, and so the question is, how important is anonymity to us?

IRA FLATOW: The question, to put it simply, as I put it to myself, is, do I own my face anymore? Does my face have its own rights?

NATASHA SINGER: Well, if face recognition becomes widely normalized, you’ll go into a store, you will be recognized and matched with your Facebook account, and then you will pay with your face to check out. So it depends on what we as a society decide needs to happen on this technology.

IRA FLATOW: I’m watching Minority Report in real life, in other words, right? I mean, in that movie, they recognized you were walking into a store and pitched ads at you. [INAUDIBLE].

SAFIYA NOBLE: Yes.

NATASHA SINGER: Or no, you won’t be recognized, right, because the technology is biased, and you won’t get those services.

SAFIYA NOBLE: Well, and we see the outcome of some of that already. For example, in rural India, where biometric identification is an important dimension of how poor people get access to food and resources, if your fingerprint, for example, fails in one of the machines, you don’t eat. And I think these are the kinds of things that we really need to be paying attention to.

And also, what does it mean about the fluidity of our identities, of the way we look, of our gender? In particular, do people have a right to change the way they look? What will the implications of that be over time? Will databases respond accordingly? And I think these are very sophisticated kinds of questions that have to do with our kind of fundamental right to be the kind of people we want to be too.

IRA FLATOW: Is there any way– I mean, I would see the point, counterpoint, spy versus spy, are people going to start trying to hide their faces by mask, by makeup, by something like that? And are people who want to do that going to be viewed badly, as bad actors? I mean, that’s a scary thought.

NATASHA SINGER: Well, it’s a really good question, especially in countries where they’re banning people from wearing veils and facial covers. Right? So the question is also whether covering up certain parts of your face is even going to work, because face recognition, as it becomes more powerful, might be able to identify certain parts of your face, your forehead, your brow. So that might not even work, even if you wanted to do it.

IRA FLATOW: I’m Ira Flatow. This is Science Friday from WNYC Studios, talking about facial recognition with Natasha Singer and Safiya Noble. Are we in uncharted territory here? You know, is this something that’s just sneaking under the radar, and it’s going to creep up on us like a lot of other technology?

NATASHA SINGER: Well, we’ve seen the automobile. We don’t even think about it. The cell phone, we don’t even think about it. They’re technologies that are part of our everyday lives. The question is, is facial recognition so different, is there something that so threatens our basic freedoms, that we have to do something about it?

And Amazon, in a blog post, argued that like, this was a promising new technology, it’s not being misused, it helps find missing children, and that it shouldn’t be regulated right now. And then, there’s another theory that like, we shouldn’t wait for harm. We can see that there is a potential for great harm, and that Congress needs to intervene.

IRA FLATOW: You said that deep machine learning and artificial intelligence will become a major human rights issue of the 21st century, and not in ways we’re maybe inclined to think. In other words, there are things we haven’t thought about that might be useful to facial recognition and, suddenly, oh, wait, I never thought about that.

SAFIYA NOBLE: Well, I do make that argument in my work, because I think there are so many consequences that we learn about after the fact. We learn about the harms of everyday technologies that we use long after the damage is done. And it becomes incredibly difficult, often, to intervene upon damages or harm, because private companies have the right to do what they want with their products and services. They don’t belong to the public. They don’t have a healthy, robust set of consumer protection laws around them.

And so, this will make it very difficult for us to think about the loss of human rights, the loss of civil rights, as we’re engaging with these technologies because, again, I think not only do we not have a legal framework to take those matters up, but we also don’t really have kind of common sense understanding of what many of these technologies are doing.

And my concern, of course, is that people will be denied food, or access to resources that they need, or education and employment opportunities, because more and more, datafication leads to kind of algorithmic decision-making about the fundamental distribution of resources in our society, rather than human logics, or compassion, or empathy, or other ways of knowing.

A lot is at stake as we move forward. I mean, one thing we know, for example, is that computers don’t have empathy, and they don’t really make decisions. They do a lot of matching. But there are other kinds of ways of human decision-making that are incredibly important, where we consider factors that machines cannot replicate.

And I think these are some of the unintended consequences that we really cannot even begin to understand yet. And it’s important for us to have these conversations before things go directly from some research and development lab to a venture capitalist boardroom, and directly to the marketplace, with no kind of research or policy considerations around them.

IRA FLATOW: Yeah. Just trying to keep up with stuff as it develops. We’re hoping that you both will follow and come back and talk with us about it again.

NATASHA SINGER: Thank you very much.

SAFIYA NOBLE: Thank you.

IRA FLATOW: You’re welcome. Safiya Noble is an Assistant Professor in Communications at the University of Southern California and author of Algorithms of Oppression: How Search Engines Reinforce Racism. Natasha Singer is a Technology Reporter for The New York Times. Thanks again.

Copyright © 2018 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Alexa Lim

Alexa Lim was a senior producer for Science Friday. Her favorite stories involve space, sound, and strange animal discoveries.

About Lucy Huang

Lucy Huang is a freelance radio producer and was Science Friday’s summer 2018 radio intern. When she’s not covering science stories, she’s busy procrasti-baking.
