06/19/2020

Protests Shine Light On Facial Recognition Tech Problems

24:24 minutes

Image: An abstract illustration of a face with geometric lines drawn over it. Credit: Shutterstock

Extended Cut

In a special extended interview, listen to Ruha Benjamin and Deborah Raji discuss practical steps for developing artificial intelligence technologies in a way that leads to more equity and equal justice—whether it’s tech companies being ready to slow down, or government standards that take racial justice into account.


Earlier this month, three major tech companies publicly distanced themselves from the facial recognition tools used by police: IBM said they would stop all such research, while Amazon and Microsoft said they would push pause on any plans to give facial recognition technology to domestic law enforcement.

IBM CEO Arvind Krishna explained the move was because of facial recognition’s use in racial profiling and mass surveillance. Facial recognition algorithms built by companies like Amazon have been found to misidentify people of color, especially women of color, at higher rates—meaning when police use facial recognition to identify suspects who are not white, they are more likely to arrest the wrong person.

Nevertheless, companies have been pitching this technology to the government. Just this week, the American Civil Liberties Union (ACLU) uncovered documents showing Microsoft has been trying to sell facial recognition to the federal government’s Drug Enforcement Administration since at least 2017, calling into question exactly how to define ‘police.’ Beyond facial recognition, Amazon’s tech already helps power the databases of government agencies like ICE that are responsible for the recent crackdowns on immigration.

CEOs are calling for national laws to govern this technology, or for programming solutions to remove the racial biases and other inequities from their code. But others want to ban it entirely, and to completely re-envision how AI is developed and used in communities.

SciFri producer Christie Taylor talks to Ruha Benjamin, a sociologist, and AI researcher Deborah Raji about the relationship between AI and racial injustice, and their visions for slower, more community-oriented processes for tech and data science.



Segment Guests

Deborah Raji

Deborah Raji is a technology fellow in the AI Now Institute at New York University in New York, New York.

Ruha Benjamin

Ruha Benjamin is the author of Race After Technology: Abolitionist Tools for the New Jim Code and a professor of African American Studies at Princeton University in Princeton, New Jersey.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow. It’s been a big couple of weeks for facial recognition technology. IBM said it would stop using it entirely. Meanwhile, Microsoft and Amazon have paused allowing police to use their facial recognition technologies, at least until there is a national law to ensure its use doesn’t perpetuate racial inequities. Science Friday Producer Christie Taylor talked to two experts who have a different idea– ban facial recognition entirely, and rethink how we develop new AI technology while we’re at it.

CHRISTIE TAYLOR: So is this a national moment for facial recognition too? If so, it’s been building for a few years. San Francisco banned facial recognition use by police and other government agencies last year. And research revealing huge disparities in how accurate facial recognition is, that’s been around since MIT research in 2018, which found that facial recognition is most accurate if you’re a white man, and least if you’re black, a woman, or both.

We’ve talked about technological solutions for the biases that can be built into AI before. But like the national conversation around policing, there are people who don’t just want to reform AI, but actually stop investing in technologies that are too harmful to reform. Here to talk about why that is, Dr. Ruha Benjamin, Professor of African-American Studies at Princeton University, and author of Race After Technology: Abolitionist Tools for the New Jim Code. Welcome back to Science Friday, Dr. Benjamin.

RUHA BENJAMIN: I’m thrilled to be here. Thanks for inviting me.

CHRISTIE TAYLOR: And also here we have Deborah Raji, a Technology Fellow at the AI Now Institute at New York University. Thank you for joining us, Deb.

DEBORAH RAJI: Lovely to be here. Thank you for inviting me.

CHRISTIE TAYLOR: Yeah, you’re welcome. Ruha, I’m going to start with you. Because in the last week, we’ve seen IBM say they were divesting entirely from facial recognition. And Amazon and Microsoft say they would stop selling their products to police, at least for a while. IBM CEO Arvind Krishna condemned the use of facial recognition software in racial profiling and mass surveillance. Why are we seeing this now?

RUHA BENJAMIN: I mean, I think it speaks to the power of protest, the power of public condemnation against policing in general, the police abuses we’ve seen. And companies far and wide are trying to distance themselves from what people are rightly criticizing. And so, in addition to these tech companies, we’ve seen everything from Hollywood movies and shows, Cops being taken off the air, to NASCAR banning the Confederate flag. So I think it’s part of the spectrum in which people are understanding this cultural shift is not going anywhere. So they have to respond in kind.

CHRISTIE TAYLOR: Deborah, how widespread is police use of facial recognition technology? Will these moves make any kind of a difference?

DEBORAH RAJI: Yeah, I want to just emphasize that since summer of 2018, there’s been so much effort from the ACLU, but also a lot of other advocates and technologists, to try to expose and discuss the reality of the use of facial recognition by police. So the ACLU, since summer of 2018, and likely even before that, has been sort of investigating Amazon in particular, and its attempts to sell that technology to police departments.

Amazon on their website advertises at least one police department client that we know of, which is Orange County. But there are also reports from different groups, including workers at Amazon, claiming there to be more clients. And the ACLU themselves have identified pitch decks to departments in Orlando and other regions. And even if it’s just the one department that is advertised on their website making use of facial recognition, that’s still affecting thousands and thousands of people.

So it really is an impactful decision for them to pull away from the technology, especially as this more nuanced conversation around its more widespread use is happening. I’m very grateful to see the conversation get to the point where they understand that this technology cannot continue to be sold while the policy conversation is happening. So yeah, I do think that it is an impactful decision. And it will be directly correlated to protecting people, on the order of thousands of people, from harm.

CHRISTIE TAYLOR: Why policing? Why is that where so much of the worry about facial recognition is concentrated, Deb?

DEBORAH RAJI: It’s not just policing, to clarify. There’s use of facial recognition in certain hiring tools, like we saw with HireVue. There’s a lot of interest from the Department of Homeland Security in using facial recognition as part of the immigration process. It very much is a part of the fabric of American life in different ways.

I think policing is an alarming one. And facial recognition is this technology that is very easily manipulated and very centralized. You have a lot of identifiable biometric information about a lot of people. It requires a certain amount of compute to like create the model. To put them in the hands of an authority figure that we’re beginning to question and we’re beginning to distrust, I think is really at the heart of a lot of this conversation that we see today, around do we trust the police with this technology that can be so easily weaponized?

And then also, historically looking at why facial recognition was encouraged to be developed in the first place: a lot of the early investment in the technology came from groups like the National Institute of Standards and Technology, which was the first group to really build a lot of these big base data sets to sort of kick-start the industry in the US. And a lot of the early funders were coming from intelligence agencies, with an eye towards law enforcement, thinking about mug shots.

We have a lot of face data connected to the law enforcement paradigm in the US. So it’s very easy to use facial recognition for that purpose. And in the last couple of weeks, we’ve seen how, just because it’s very easy to use it, and it’s this very important tool for them, does not necessarily mean that they are the right people to entrust with this tool.

RUHA BENJAMIN: And I would say one of the dangers is that we sort of take this win and then become complacent.

DEBORAH RAJI: Yeah.

RUHA BENJAMIN: Because the line between law enforcement and so many other institutions is very porous. And so, for example, when schools use facial recognition– UCLA was about to implement a facial recognition system to look at people as they were coming on campus, to determine if someone was an actual student, faculty, or staff. And when Fight for the Future, a digital rights non-profit, analyzed the system, it found that it came back with 58 false-positive matches that were largely students of color.

So you could imagine that a black student is walking across campus, falsely flagged as an intruder, and the police are called, and what will happen in that instance. And we see what’s happening on the street, when the police arrive, and decide that a black person is to be targeted. And so in this case, when educational institutions, private sector companies, all kinds of public spaces employ this, the police are right behind them.

And so it’s not enough just to ban it on the part of police, when other institutions and entities use a tool that’s not only scientifically faulty, but also one that has deep racist roots.

CHRISTIE TAYLOR: I want to talk more about those inaccuracies that you just referred to, because Deb, I know you were a co-author on some of the research that uncovered those disproportionate inaccuracies. Tell us more about that.

DEBORAH RAJI: Yeah, I was sort of involved in this project led by Joy Buolamwini at the Algorithmic Justice League. And at the time, she was a grad student at the MIT Media Lab. And she was able to identify the fact that in computer vision research, the way that these models were trained and evaluated relied on test images that didn’t necessarily represent the full scope of the populations that they were being implemented on.

This reflects sort of my early experience, where I was working on an applied machine learning team, and I was noticing that a lot of the data sets that I had to work with did not include anyone that looked like me. So there were not a lot of darker skinned people. There were not even a lot of women. So Joy really led the effort to ask the question of what would happen if we created an evaluation test set that actually represented the full range of skin types that we have, so darker skin and lighter skin, and was balanced with respect to those different skin types, but also gender, and looked at the performance at the intersection of these axes.

So she created this project called Gender Shades, that was really that first critical evaluation of how does this deployed product– and this is something that I like to remind people, is at the time when we audit these systems, they’re already out there in the world. They’re products that are already sold, already integrated into applications, who knows where.

So we said, looking at these products that have already been deemed good enough to throw out into the world, how well do they actually work for these different subgroups? And what we found was that there was almost a 30% gap in accuracy between the darker female subgroup and the lighter male subgroup. And that was the first round of audits, on IBM, Microsoft, and Face++, which is a Chinese facial recognition company.

There was a very public response to that initial audit. And we were thinking, oh, maybe this represents a shift with respect to the industry. So we did a follow-up audit to see how the companies that we had audited responded, but also some of these other companies, including Amazon. And what we found was that even after being witness to other companies getting audited, and understanding that there was a racial bias issue that existed within the facial recognition space, Amazon still demonstrated disparities of over 20% to 30% between the darker female subgroup and the lighter male subgroup.

I remember when I first began to just holistically notice, like, oh, there’s not a lot of black people in these data sets when I started working. When I first noticed these things, I remember trying to have conversations with my manager at the time. And he was kind of like, it’s so hard to collect data at all, why would we think about representation, why would we think about diversity? It was such an ingrained attitude at the time, to ignore the problem, because it was just too hard.

And I think now, we’re at a point where there’s this acknowledgment that, no, we should create representative data sets, but also that this is a great starting point for really questioning the functionality of these technologies. Like, does facial recognition really work if they were already deploying a version of this technology that did not work for black people, or for darker skinned people?

Does it really work if, when they attempt to diversify the data sets, there are privacy violations that are discovered? Does it really work if it’s so easily weaponized by institutions that we no longer trust? So a lot of these questions really just spewed out of that project. So I’m grateful to have participated in that.
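
To make the audit Raji describes concrete, here is a minimal sketch, in Python, of how subgroup accuracy and the gap between the best- and worst-served groups can be computed once a test set is labeled by skin type and gender. The file name, column names, and data are hypothetical placeholders for illustration, not the Gender Shades code or data.

    # Minimal sketch of an intersectional accuracy audit (hypothetical data).
    # Assumes a CSV with one row per test image and placeholder columns:
    #   skin_type ("darker"/"lighter"), gender ("female"/"male"),
    #   label (true attribute), prediction (model output).
    import pandas as pd

    df = pd.read_csv("audit_results.csv")  # hypothetical audit file

    # Accuracy within each skin-type x gender subgroup
    subgroup_accuracy = (
        df.assign(correct=df["label"] == df["prediction"])
          .groupby(["skin_type", "gender"])["correct"]
          .mean()
          .sort_values()
    )
    print(subgroup_accuracy)

    # The headline number: the gap between the best- and worst-served subgroups
    gap = subgroup_accuracy.max() - subgroup_accuracy.min()
    print(f"Largest subgroup accuracy gap: {gap:.1%}")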

RUHA BENJAMIN: And one of the things I really appreciate about your work, Deb, about Joy’s work, is that after the initial technical faultiness of these systems was revealed, the goal wasn’t simply to perfect the systems, to make them more accurate at detecting people, when the actual mechanisms of identifying people are themselves unjust and unethical. And so the whole project is not simply about honing these tools, but about actually using the faultiness to pose these larger questions about whether we want these at all.

And so the goal is not simply better tools and more accurate tools, when that would likely only lead to a more honed injustice in the process, better able to identify the most vulnerable in our communities. And so I just wanted to add that we’re not questioning simply the scientific merit of these systems, but their ethical and their political merit.

CHRISTIE TAYLOR: Just a quick reminder that this is Science Friday. I’m Christie Taylor. Talking about rethinking AI with Dr. Ruha Benjamin and Deborah Raji. Ruha, you mentioned this idea of a more honed injustice. What kinds of harms exactly do you see from something like that?

RUHA BENJAMIN: I think of facial recognition as part of a whole family of technologies, automated, AI-based technologies, that have been rolled out under the guise of neutrality, what I call the new Jim Code. We see it when it comes to administering public benefits. We see it in health care. We see it in our prison system.

So for example, in the midst of the pandemic, there’s been a lot of outcry about the overcrowding in our jails and prisons. And so one of the responses has been a technical fix. Let’s use a risk assessment tool, one called Pattern, to decide who is the least risky to release so that we can deal with this overcrowding. And Pattern, this risk assessment tool– first of all, it’s scientifically unverified. And then those who have audited it have found that 7% of black men were classified as minimum risk and able to be released, compared to 30% of white men. And there were biases associated with homelessness and mental illness.

And so here is an example of a technical fix that’s posited as a solution for some pandemic-related crisis, in this case overcrowding in prisons, that has this racial bias baked into it. And this is just one of many examples, and one of the most recent, in which we see that the turn to automation and automated decision systems, in the default settings, will very likely lead to an exacerbation of existing inequalities. And it hides these inequalities behind a veneer of neutrality and objectivity that makes them even harder to question and hold accountable.

If it was a biased judge sitting up there, or a biased prosecutor, at least you could point to the person. But in this case, people point to a screen, and say, this thing can’t make decisions. This thing doesn’t have a grudge against people or a hatred against people. And yet, baked into it are patterns of profiling and discrimination that then get hidden behind a statistic or a score. And so it cuts across almost every institution– education decisions, health care decisions, public benefits. It’s penetrated every area of our lives. And many people don’t even realize that very consequential decisions in their lives are being made by automated systems that are exacerbating inequalities.

CHRISTIE TAYLOR: Deb, do you have any other examples you would point to?

DEBORAH RAJI: I like to sort of remind people that a face is sort of the equivalent of a fingerprint, with respect to its role and its status as an identifiable biometric. Like, we do not upload pictures of our fingerprint to the Internet. We’re very careful about that data. And we should be just as careful about face data. So because our faces are these identifiable things, and we upload them so freely, a lot of companies, including Clearview, have found a lot of success in terms of this idea of digital surveillance, and just being able to match different profiles online using the photos that people post. Being able to track people for the sake of whatever authority figure.

So there’s stories of ICE tracking particular suspects using their information of their different social media outlets, using their face data. So I see that being an alarming example of the use of facial recognition outside of the CCTV camera and identifying your face as you’re walking down the street. There’s a lot of online information that facial recognition makes its way through and organizes for other people.

And then the other example that I think of often is the case of the Atlantic Plaza Towers apartments with the Brooklyn tenants. It’s a case I reflect on a lot. So to just give a short recap, there are these tenants in this rent-controlled building in Brooklyn. And they find out that their landlord, who had a history of racial bias and has an incentive to evict tenants, wants to install a facial recognition system. And the tenants are against the facial recognition system.

And it’s a recorded case that on the surface feels like it’s a majority black community, maybe they’re worried about accuracy, or maybe they’re worried about privacy, because the data is not encrypted, and they’re not sure where the data’s going to go. When I started having conversations with the tenants, I realized their fear was really the way that the landlord could very easily weaponize that technology to monitor them, to monitor them coming in and out. It was like a threat to their safety in that sense of the technology being weaponized as a method of controlling that environment, and really putting them at risk with respect to this authority figure that they couldn’t trust. So, yeah, those are I guess some of the cases that I reflect on a lot.

IRA FLATOW: We have to take a break, but when we come back, more on reimagining our relationship with artificial intelligence. Plus, NASA has signed a contract with a private company to deliver a rover to the south pole of the Moon to look for water. You’re going to want to hear that. So stay with us.

This is Science Friday. I’m Ira Flatow. In case you just joined us, we’re talking about facial recognition, artificial intelligence, and developing technology without harmful biases. Producer Christie Taylor spoke with Dr. Ruha Benjamin, Professor of African-American Studies at Princeton University, and Deborah Raji, a Technology Fellow for the AI Now Institute at New York University.

CHRISTIE TAYLOR: Ruha, I want to go back to police, because I saw on your Twitter feed the other day, you said, with technology, we can police without the police. What did you mean by that? And is that a good thing?

RUHA BENJAMIN: Yeah, absolutely. And what’s interesting is that people interpreted it– some people, obviously, who were commenting, seemed to interpret it as a kind of encouragement of policing without the police, when it was a critique. And, in fact, Deb’s last example is a prime example. Because in this case, you have a private housing developer implementing facial recognition and exercising forms of containment, and control, and surveillance without someone standing there with a badge and a uniform checking people out.

So that’s an excellent example of the same logics and practices of policing that keep a watch on people, that profile people. All of those things can be exercised without the institution of the police. So my concern is that now, in this moment, when we’re focused on defunding the police, we stop looking at the ways that racism is mercurial. It takes different forms. And as soon as you– you may lessen the amount of policing in your town or city, but other institutions take up the work of policing by, for example, implementing facial recognition or other types of surveillance tools.

The idea of abolition that is becoming more mainstream in this moment, it has a dual meaning. That is, to destroy and to grow in the etymology of this word, [FRENCH]. And so we have to think about what we want to get rid of, but also what we want to grow. Because if we’re not growing alternative institutions, practices, and ways of life, then that old institution is going to take on a new form. It’s going to shape shift. It’s going to be exercised through various kinds of more invisible forms of policing, that again will be hidden behind a veneer of neutrality.

CHRISTIE TAYLOR: What about the conversation about abolishing versus reforming the police? Is there a parallel in the AI world? We need better training data versus this technology is too dangerous to exist?

RUHA BENJAMIN: Yeah, I think one form that the conversation takes is to be wary of, and to push back against, tech fixes for social problems. Even now, there are people who are positing various kinds of apps to help deal with police violence. And so finding a technical fix that papers over the deep roots of a problem means that you might deal with certain symptoms of an issue, but the underlying issue will come up in a different form if we don’t keep our eye on that.

And so, just thinking about police reform first, we know that in Minneapolis, that police department had implemented so many of the different reforms that many people call for. They implemented implicit bias training, cams, community engagement, and mindfulness training, all of the things we can call for. And yet, [George Floyd] still died. He was still murdered. And so, again, it underscores that simply tweaking an institution that was born out of slave patrols, that was born out of a desire to contain people, is never going to get us where we want to go. So again, with tech fixes we have to think about what the underlying cause is.

For me, the one area where I think AI, and data collection, and all of that, has a role to play is when we flip the lens back onto those with power and actually use it to expose issues, not necessarily try to fix them. So for example, when it comes to housing discrimination, there’s a wonderful initiative called the Anti-Eviction Mapping Project, which turns the lens onto landlords, rather than tenants. And it looks at the practice of evictions. It looks at different cities. And it finds patterns of evictions. It lists the worst evictors in different cities.

And now, during COVID, it’s paying attention, as the various kinds of eviction moratoria are running out, to what the underlying issues are that are causing this. So when we turn the digital lens onto those who monopolize power and resources, and use it to expose problems, I think it has a role to play. But again, we have to think about what question we are posing that we want the technology to answer. And there, you’re going to find the seeds of either the ability to subvert power or to really reinforce existing power relations.

DEBORAH RAJI: Yeah, I think with reform, especially in the case of post-Ferguson, a lot of the reform measures that were proposed required investment in the police. The proposal for cameras, for example, like that actually gave more money to the police departments to invest in that technology. And we see that sometimes in response to revelations of bias in the technology. People will be like, oh, let’s just invest more in facial recognition to address the bias.

And I think putting that on its head, and asking to defund that technology, defund the police, or ban it, I think that’s like such an incredible counter-narrative. To say, actually, rather than investing more energy into this space, why don’t we actually just take a step back and completely reinvent the scope of solutions we’re thinking of for the issues that are– the real underlying issues that we’re trying to address here.

A lot of AI researchers are definitely going through a phase of understanding that maybe we don’t want to invest more of our resources, and time, and effort into improving facial recognition. Maybe there’s just so many dimensions of concerns here that we need to just like take a step back from this field and let it go. And also, really advocate for the big tech companies, and also the smaller tech companies, to stop the sale of this technology, and really restrict its use in a significant way by advocating for that kind of policy.

CHRISTIE TAYLOR: Well, it was really wonderful to talk to you both today. Thank you so much for your time.

RUHA BENJAMIN: Thanks for having me.

DEBORAH RAJI: Thank you so much for having us.

RUHA BENJAMIN: Yeah, this is awesome.

CHRISTIE TAYLOR: Dr. Ruha Benjamin, Professor of African-American Studies at Princeton University and author most recently of Race After Technology: Abolitionist Tools for the New Jim Code. And Deborah Raji, a technology fellow for the AI Now Institute at New York University.

And just a quick note, this interview was actually much longer. Keep an eye out on the Science Friday podcast feed for Ruha and Deborah’s policy wish list and vision for more ethical tech coming soon, wherever you get your podcasts. Plus, you can learn more about their work and more about calls to ban facial recognition technology from communities, like Detroit, on our website, ScienceFriday.com/communityAI. For Science Friday, I’m Christie Taylor.

Copyright © 2020 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producer

About Christie Taylor

Christie Taylor was a producer for Science Friday. Her days involved diligent research, too many phone calls for an introvert, and asking scientists if they have any audio of that narwhal heartbeat.

Explore More

Seeking Algorithmic Justice In Policing AI

AI researchers and advocates discuss abolishing facial recognition tech—and why gradual reforms aren’t enough.
