09/14/2018

The Algorithms Around Us

23:10 minutes

In California, an algorithm now determines who gets bail. Credit: Wikimedia Commons

Last month, California passed a bill ending the use of cash bail. Instead of waiting in jail or putting down a cash deposit to await trial at home, defendants are released after the pleadings. The catch? Not everyone gets this treatment. It’s not a judge who determines who should and shouldn’t be released; it’s an algorithm. Algorithms have also been used to figure out which incarcerated individuals should be released on parole.


Mathematician Hannah Fry and computer scientist Suresh Venkatasubramanian join Ira to discuss how algorithms are being used not only in the justice system, but in healthcare and data mining too. And this algorithmic takeover, they say, could have a dark side. You can read an excerpt from Hannah Fry’s forthcoming book, Hello World: Being Human in the Age of Algorithms, at sciencefriday.com/helloworld.


Segment Guests

Suresh Venkatasubramanian

Suresh Venkatasubramanian is a Professor at the School of Computing at the University of Utah in Salt Lake City, Utah.

Hannah Fry

Hannah Fry, Ph.D., is a lecturer in the Mathematics of Cities at the Centre for Advanced Spatial Analysis at UCL, and is co-author of The Indisputable Existence of Santa Claus (The Overlook Press).

Segment Transcript

IRA FLATOW: Last month, California became the first state to end cash bail. Instead of waiting in jail or putting down a cash deposit to go home to wait for a trial, defendants are now just being released after the pleadings. There is a catch, though. Not every defendant gets this treatment.

The judge who decides who should and should not be released bases his or her decision on the advice of an algorithm. Algorithms have also passed judgment on which inmates get released on parole. Algorithms are used in the car industry, the medical world, and of course, in determining your social media feed.

Have algorithms made a decision for you or someone you know? We want to hear about it. Give us a call. 844-724-8255. That’s 844-SCI-TALK. You can also tweet us. @SciFri.

Here to talk about how algorithms have found their way into our everyday lives is Hannah Fry. She’s a mathematician at University College London and author of the new book “Hello World: Being Human in the Age of Algorithms.” We have an excerpt at sciencefriday.com/helloworld. She joins us from the BBC. Welcome to “Science Friday.”

HANNAH FRY: Hello. Thank you very much for having me.

IRA FLATOW: You’re welcome. And now, algorithms have been around for a long time, so why write about them now?

HANNAH FRY: Well, I think that things have changed in the last few years. You’re absolutely right. Even the examples of algorithms being used to decide whether or not someone should get bail. I mean, this has a very long history. It dates back to the 1920s, 1930s with the very simplest kinds of algorithms.

But I do think that something has changed in the last five or 10 years, anyway. I think what’s changed is the amount of data that is collected on us and how that data is analyzed and then used to predict behavior.

But I also think that with the advent of artificial intelligence, the amount of power that algorithms are being given and the number of situations in which they’re being deployed, really, to make decisions about our lives is only increasing. And I thought that was something that was really sort of, well, quite timely, really. I think it was important to really put that all in one place.

IRA FLATOW: You write in your book that the only way to objectively judge whether an algorithm is trustworthy is by getting to the bottom of how it works so you can root out the errors, a lot like magical illusions.

HANNAH FRY: It’s true. It’s true. I think on the surface, a lot of this stuff, especially artificial intelligence, it looks like it’s actually magic, you know. It looks like it’s wizardry. But very often, when you dig behind the surface and look at how the trick is done, there is often something incredibly simple lying behind the scenes. And often, actually– well, and at least occasionally– there’s things that are quite worryingly reckless there, too.

IRA FLATOW: And you say that’s really because your book is about humans, right? They’re the people who write the algorithms.

HANNAH FRY: Yeah. I don’t think that you can really separate the two. I don’t think that you can look at algorithms in isolation. I think that you have to accept that when they’re out there in the world, they’re being used by people, about people, and all of us have these really inherent flaws in us. We have all kinds of subconscious biases.

We have issues where we over-trust what a machine tells us, and then, at the other end of the spectrum, we’re very good at dismissing any machine that makes any kind of mistake whatsoever and thinking that we know better. And you know, that’s happening within the people who are creating these algorithms, too. And I think that we have to kind of think of this as humans and machines together, not just how good is the artificial intelligence on its own?

IRA FLATOW: So it’s not a question of when you trust the machine over your own judgment or not?

HANNAH FRY: Well, you know, I think that it’s different in different cases, really. I think that there are some situations in which all you want is the best prediction that you possibly can. You just want the most accurate prediction. And in those cases, I think if an algorithm can prove– and it’s quite a big if there– if an algorithm can prove that it can make a better prediction than a human can, then I think that that’s sort of a situation in which you want to hand over some level of control.

An example of that might be in some of the cancer diagnosis algorithms that are existing now that are screening biopsy slides and looking for tumors. Now, they have their problems, and I think that you have to work carefully in the way that you design them to work around those, but if it can, if those algorithms are more sensitive than a human pathologist looking at hundreds of these slides every single day, if the algorithm can pick up on really, really tiny clues hiding amongst your cells as to what your future holds in store for you, then I think in that situation, actually, you should give up some control and trust the algorithm perhaps over just a human on their own.

But I think there are other situations, particularly in the criminal justice system, where I think we have to be really, really careful and think very long and hard about how much control we hand over and the ways that we do that.

IRA FLATOW: Let me expand on that because I’m going to bring in another guest, Suresh Venkatasubramanian, who is a professor in the School of Computing at the University of Utah here in Salt Lake. He’s a board member of the ACLU in Utah, and he’s here at KUER in Salt Lake. Welcome back.

SURESH VENKATASUBRAMANIAN: Thanks for having me.

IRA FLATOW: You’ve looked into these issues about parole algorithms, right? Are they good? What are the pluses and minuses of them?

SURESH VENKATASUBRAMANIAN: Well, first of all, I just want to commend Hannah. The book is awesome. I think the kind of nuance that you bring to it and that you just described is exactly more of what we need in these discussions, rather than kind of binaries we’ve been talking about. So thank you for the book.

HANNAH FRY: Thank you.

SURESH VENKATASUBRAMANIAN: So I think, as Hannah points out, some of the challenges– and I think California’s discussion sort of brings this up into sharp relief– are that there are often, in these situations, laudable goals. The idea of reducing pretrial incarceration. The idea of eliminating money bail. The idea of just not punishing people because they’re poor. These are laudable goals, and I think it’s worthwhile to see whether machine learning can help us achieve some of these goals.

The problem is these issues are remarkably, impressively subtle. As Hannah points out, just because an algorithm makes a recommendation, it does not mean a judge is required to or will take the recommendation. And it’s not clear exactly, as you mentioned, how the algorithms do make predictions. And it’s not clear whether the data being fed in to train the algorithms– and this is something we don’t often talk about– has the right signals in it to capture exactly what you’re trying to capture.

There are lots of– so for example, shifting slightly away from pretrial and going to parole, one of the goals of parole modeling is to understand whether someone will reoffend after being released, and that would be considered a bad thing. But if you measure, for example, whether people are rearrested, that’s a subtly different thing you’re predicting.

You’re not predicting whether they will recommit a crime. You’re predicting whether they’ll be rearrested. And if that data is being used to model reoffense rates, then you get a completely different system to what you expected to get. And so there are many, many subtleties that require extensive domain knowledge, and so just taking a black box algorithm and putting it in is really not going to help you.
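
Venkatasubramanian’s rearrest-versus-reoffense point is, in machine learning terms, a proxy-label problem, and it can be sketched in a few lines of code. The sketch below is purely illustrative: the synthetic data, feature names, and logistic-regression model are assumptions, not a description of any real pretrial or parole tool. It simply shows that a model trained on rearrest records estimates the chance of rearrest, which bakes in policing patterns, rather than the chance of reoffending.

```python
# Minimal, hypothetical sketch of the proxy-label problem described above.
# All data and column choices are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Features we might observe about each person (purely synthetic).
age = rng.integers(18, 70, size=n)
prior_arrests = rng.poisson(1.5, size=n)
X = np.column_stack([age, prior_arrests])

# What we actually care about: does the person reoffend? (Largely unobserved in practice.)
reoffended = rng.random(n) < 0.2

# What the historical record contains: were they rearrested? Rearrest depends not only
# on reoffending but also on how heavily someone is policed, modelled here as
# correlated with prior arrests.
policing_intensity = 0.3 + 0.1 * np.minimum(prior_arrests, 5)
rearrested = reoffended & (rng.random(n) < policing_intensity)

# Training on the rearrest label means the model estimates P(rearrest | features),
# not P(reoffense | features), a subtly but importantly different quantity.
model = LogisticRegression().fit(X, rearrested)
print(model.predict_proba(X[:5])[:, 1])  # "risk scores" for the first five people
```

Swap in a different proxy label and the same pipeline produces a different “risk score” for the same person, which is exactly the subtlety being described.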

IRA FLATOW: And Hannah, you write a lot about that in your book, also.

HANNAH FRY: Yeah. I do. I mean, to sort of add to that point there, I think that within these systems, if you are using the whole history of all of the arrests that have happened in the past and using that to kind of project forward into the future, then inevitably, within all of that data, you are going to be encoding into your algorithm centuries of bias and unfairness, really.

I mean, the analogy that I like to give is if you do a Google image search for maths professor or math professor, as you might expect, an awful lot of the top 20 images are going to be white men. And actually, the statistics of what’s reflected back to us are pretty accurate. They do reflect what happens in universities around the world. The vast majority of math professors are, indeed, white men.

But I think that there’s a really strong argument that sometimes, you don’t want technology to be a mirror of society. You don’t want it to reflect the kind of history that we have that led us to this point. You want it to help us move towards a better society and move us in the right direction– what that right direction is and how you should go about it, I mean, that’s a whole other question. But that’s one that exists completely outside of the algorithm itself.

IRA FLATOW: I’m Ira Flatow. This is “Science Friday” from WNYC Studios. Talking about algorithms with Suresh Venkatasubramanian and also with Hannah Fry, author of the new book “Hello World.” Lots of phone calls. A lot of people want to get in on the conversation. I think we’re going to go to them now. Let’s go to San Antonio. Simeon in San Antonio, welcome to “Science Friday.”

SIMEON: Hi. Thanks for taking me. I have a commercial driver’s license, and about 10 years ago, I left a job on good terms and I left the vehicle. It had several scratches on it. And I had no idea, but they had reported it as an unreported accident. It looks like a serious accident happened, and I wasn’t even aware of it.

And years later, when I was looking for a job, I couldn’t get anybody to hire me. I had no explanation of why I was unable to get hired. I looked into it many years after that, and I saw that there was an accident reported on my record. And it was just real hard to resolve. So to this day, it’s hard to get on with large companies that have algorithms doing the hiring.

IRA FLATOW: Let me get a comment on that. Hannah.

HANNAH FRY: Yeah. I mean, it’s an appalling story, and it’s something that happens just depressingly often, that as soon as your information, your data, a mark has been made against your name– once it’s put into an algorithm, into a computer, then suddenly, it takes on this air of authority that makes it almost impossible to argue against.

And I really think that we shouldn’t necessarily just be thinking about, how perfect can we get artificial intelligence? How perfect can we get algorithms to be? We should always be thinking about, how can we design them for redress? How can we design them to be appealable? Because stuff like this really shouldn’t happen.

IRA FLATOW: Yeah. Suresh.

SURESH VENKATASUBRAMANIAN: I think this brings up another important issue that has not been fully appreciated, I think, by the tech community and the larger world. When you put algorithmic decision making in place, you’re putting it into a system. It does not exist in a vacuum.

And so it is not enough to merely evaluate how the algorithmic system works. You have to evaluate how it affects the parts around it, as well. In this example, you’re talking about algorithmic hiring, and about the fact that one erroneous data point had an effect. First of all, we know that a lot of algorithms are very sensitive to small changes in data, so one small mistake can make a huge difference.

And when you recognize that they’re part of a larger pipeline, then, you think about checks and balances. You think about humans in the loop. You think about a larger system of decision making of which algorithms should be one part. And we don’t design our systems that way. We sell them as black boxes that can replace humans, and that’s really the wrong way to think about this process.

IRA FLATOW: We have a lot more to talk about. We’re talking about algorithms. We welcome you to participate. You can also tweet us. @SciFri, S-C-I-F-R-I. We’re going to take a break and come back and talk more with Hannah and Suresh. Stay with us. We’ll be right back after this break.

This is “Science Friday.” I’m Ira Flatow. We’re talking this hour about how algorithms influence our lives and how we need to be careful to design them, when we sit down and design them, with fairness in mind. My guests are Hannah Fry, author of the new book “Hello World.” It’s a great book. Again, it was just example after example. It’s a terrific book. She’s associate professor of mathematics of cities at University College London.

HANNAH FRY: It rolls off the tongue.

IRA FLATOW: It does. Yes. You’re the second professor I’ve had from that university, and I make the same mistake all the time. Wait ’till you try it. And also, Suresh Venkatasubramanian, who is a professor in the School of Computing at the University of Utah in Salt Lake City. Both here.

Our number– well, it’s so full up, I’m not going to give our number out because we’ve reserved a spot for you to join in. Let me go right to the phones. Let’s see. Where are we going to go? Let’s go to– OK, let’s go to Fern. Is it Fern in Alexandria?

FEN: It’s Fen. F-E-N.

IRA FLATOW: F-E-N. My eyes aren’t working correctly today.

FEN: Thank you so much for having me on, Ira.

IRA FLATOW: Go ahead.

FEN: I’m actually a patent examiner for the US Patent Office in artificial intelligence, and we see algorithms all the time and in such a variety of manners. Your guests were talking about crime systems, and I’m actually examining a patent that’s based on price, and it’s really interesting. And I just wanted to make that comment that there are algorithms that do everything, and how important it is to the patent examining process.

IRA FLATOW: Just as a patent– are you still there, Fen?

FEN: Yeah. I’m still there. Can you hear me?

IRA FLATOW: Yeah. As an examiner, how schooled do you need to be– how up do you have to be on AI technology and design to judge the patent?

FEN: Sure. So one of the best things about patent examining is that you’re learning all of the time. Every AI patent that we examine is new, and the algorithms that they’re doing are all new. I actually have a degree in electrical engineering from Drexel University in Philadelphia. It’s very helpful to have a very solid background knowledge because you’re expected to know what these algorithms do so that you can examine the novelty of these patents.

IRA FLATOW: I got it. Thank you for taking time to be with us. Thanks for that call. So what do you think, Suresh? She says algorithms are very important for getting a patent, and she’s looking at all of these.

SURESH VENKATASUBRAMANIAN: I’m not surprised that she’s seeing all of these patent applications. I mean, to some extent, there is a lot of hype. There’s a joke, at least in the research community: any time you take some data and put it in an Excel spreadsheet, someone’s going to market it as AI now. So there’s a whole spectrum of things, from cases where they really are using sophisticated methods to cases where, really, you’re just kind of aggregating things in a box, and that’s not AI.

IRA FLATOW: That’s interesting because Hannah, you say in your book that we need something like the Food and Drug Administration. Not the FDA, but sort of a mechanism like it to be able to judge how good an algorithm is.

HANNAH FRY: Yeah. I mean, I just find it extraordinary that there is this process that exists for testing the novelty of an algorithm to sort of say you can protect intellectual property– which is, I think, exactly the right thing to do; you need to have that process– but there’s no other system that tests whether the benefit that it offers to society outweighs the cost.

It used to be the case that you could just chuck any old colored liquid in a glass bottle and sell it as medicine and make a fortune from it, but you’re not allowed to do that because it harms people and it’s just not a morally good thing to do. And I think that we’re sort of at this stage where we’ve been living, really, in the Wild West of data and algorithms where people are essentially allowed to use anything that they’ve created on members of the public. And I’d really like to see that FDA style regulation come in where you have a group of experts behind closed doors protecting intellectual property, but really kind of assessing the benefits that these algorithms offer to society.

IRA FLATOW: Let’s move on to something you touched on earlier, and that’s algorithms in medicine and in diagnostic medicine. You mentioned how good algorithms are for sorting through data like slides, picking out possibly cancerous slides versus non-cancerous slides. But so far, even IBM’s Watson hasn’t been good at sitting down with a patient who walks in and says, my stomach hurts. What is it?

HANNAH FRY: Yeah. It’s true. It’s a very tough thing to do.

IRA FLATOW: Why is that so tough for an algorithm?

HANNAH FRY: So, I mean, there are some claims that there are systems that are as good now as human doctors. There’s one here in the UK called Babylon, which has kind of been making a lot of news recently and making a lot of headlines recently. That work hasn’t yet been peer reviewed, so I’m sort of holding back celebrating it until, I think, that process happens.

But that’s so much harder than just diagnosing or spotting tumors in an image because it’s really open ended. So if you’re training a machine on looking at biopsy slides and finding tumors, you can send it hundreds of thousands of examples, get it to work through them itself, and tell it when it gets things right or wrong.

But when it comes to diagnosis, I mean, there could be anything wrong with you, right? You could walk in with any possible number of conditions and describe it in any possible number of different ways, and the knowledge graph that’s required to kind of fit all of that information together that’s held in the head of just a general practitioner is a really, really, really difficult challenge.
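
Fry’s contrast between the narrow slide-reading task and open-ended diagnosis is essentially the difference between supervised classification on labelled examples and unconstrained inference. Below is a minimal sketch of the narrow version under stated assumptions: the “slide” data are random stand-in arrays and the small neural network is an arbitrary off-the-shelf choice, not the systems discussed on air.

```python
# Minimal sketch of supervised learning on a narrow, well-labelled task:
# classify slide patches as tumor (1) or no tumor (0). Data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for a labelled archive of biopsy-slide patches: random 32x32 grayscale
# "images" flattened to vectors, each with a yes/no label.
n_slides = 200
images = rng.random((n_slides, 32 * 32))
labels = rng.integers(0, 2, size=n_slides)

# Supervised learning in one step: show the model examples and the right answers,
# and during training it is corrected whenever it gets one wrong.
classifier = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
classifier.fit(images, labels)

new_patch = rng.random((1, 32 * 32))
print(classifier.predict_proba(new_patch))  # [P(no tumor), P(tumor)] for a new slide
```

The open-ended GP consultation has no fixed input, no single yes/no label, and no obvious way to “tell it when it gets things right or wrong,” which is why it resists this recipe.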

IRA FLATOW: Suresh.

SURESH VENKATASUBRAMANIAN: In fact, I mean, one of the oldest sort of applications, or at least proposed applications, generally in AI was in expert systems, and one of the applications was that the expert system could do a diagnosis for you. And I remember as a child sort of looking at some basic expert systems written in Lisp and seeing their claims to sort of be able to do diagnosis, so there’s a long history of AI in medicine.

But I think even in this case, right? So I think Hannah’s very right in saying that the more well defined and very precise a task is, the more likely it is that an automated system could help, like in the tumor diagnosis. But even there– and I think as you mentioned in the book, also. Even there, there’s the issue of, well, does it work equally well for dark skinned people versus light skinned people? If you’re looking for skin blemishes and trying to figure out if that’s a sign of melanoma, there are all of these issues that come up, even in those kinds of settings where it seems like it might be simpler.

And I think the larger issue is that a lot of work in AI right now, especially in deep learning, is centered around this idea of how do we represent information? And if only we could find the right representation of our data, then the inference would be easy, and so the hard work is in doing the representation.

[INAUDIBLE] the representation of information is a very, very complicated thing. It’s not as simple as we’ll just line up a bunch of numbers in a vector space and do some machine learning on them. It’s way more complicated than that.
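
Venkatasubramanian’s remark about lining up numbers in a vector space can be made concrete with the simplest text representation there is, a bag of words. The two sentences below are invented for illustration; the point is that this representation assigns them identical vectors even though they say opposite things, which is exactly why finding good representations is the hard part.

```python
# Toy illustration: a bag-of-words vector counts words but ignores their order,
# so these two sentences, which mean opposite things, get identical vectors.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the treatment was effective, not harmful",
    "the treatment was harmful, not effective",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs).toarray()

print(vectorizer.get_feature_names_out())
print(X)                       # identical rows of word counts
print((X[0] == X[1]).all())    # True: this representation cannot tell them apart
```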

IRA FLATOW: It’s kind of interesting. Let’s move on. There are so many applications for this. Let’s see how many we can get through. One that was really fascinating and we have talked about many times on the program is algorithms for use in driverless cars, right? Everybody is researching driverless cars.

And you point out, Hannah, in your book, and I’ll quote it, “would people buy a driverless car if they knew the car might decide to murder them rather than the pedestrians?” Conundrum, as you put it.

HANNAH FRY: It’s true. You know, I think across the board, really, here, there’s a slight difference in our attitudes if we’re the one behind the wheel or we’re the ones standing in the road, versus if you’re thinking about how the system should be for everyone overall. I mean, you’re referencing the very famous trolley problem where a car has to decide who to kill in a certain situation.

And actually, in the book, I spoke to lots of people who work with driverless cars, and they tend to kind of roll their eyes, actually, a little bit when you ask them about this trolley problem. What would it do when presented with this situation? So in the book, I try and caveat it heavily based on what they told me.

Basically, they say it will never happen. This is an unlikely situation. But then, the exact trolley problem happened to my husband about six weeks ago. So I’ve kind of gone full circle. Now, I think that, actually, we do need to discuss these problems.

IRA FLATOW: Suresh, you’re shaking your head up and down. You’re agreeing with her.

SURESH VENKATASUBRAMANIAN: So the funny thing is when the German government put out guidelines for the development of technology for driverless cars back in June, I think, of last year, they actually had a clause there– please do not frame this in terms of trolley problems.

The thing I always want to give a shout out to is Cory Doctorow’s short story called “Car Wars,” where he discusses, I think, an issue that I think is very relevant to this in the sense that his argument about driverless cars is not at all about the trolley problem or about the efficacy of the automation. It’s about control and governance.

Who gets to control the car? What happens if you hack your car? Suppose you put in a new patch that does something that you like? Are you still going to be allowed to drive your car? I think these issues of governance, when you bring in algorithms, are something that are not discussed enough.

IRA FLATOW: Well, that’s what I was leading to. I wanted to talk about the new European Union’s General Data Protection Regulation, the GDPR. They’re aware of this, Suresh, and they’re trying to address it. What is this regulation for?

SURESH VENKATASUBRAMANIAN: So Hannah’s in the thick of it back in the EU.

IRA FLATOW: I’ll get to her after you.

SURESH VENKATASUBRAMANIAN: I think we don’t– the truth is, sitting here in the US, we don’t really know how this is going to play out yet. I think we’re beginning to see signs, for example, that a right to explanation seems to, depending on who you ask, come along with the GDPR. The idea that you are given a right to ask the algorithm why it made the decision regarding you.

IRA FLATOW: Oh, you are?

SURESH VENKATASUBRAMANIAN: Depending on who you ask, that may or may not be the interpretation of what the guideline says. As a technology problem, we don’t know what that means. What does it mean to provide this explanation? What constitutes a valid explanation? What constitutes a complete explanation? Is it enough to dump 50,000 pages of source code? Probably not. How is this going to play out?

It’s actually sort of a fascinating time for researchers in this area because the law has now provided us with an opportunity to sort of think through our research and how we ask these questions and how we solve them.
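
One way to see why “provide an explanation” is a hard technical question is to look at the simplest candidate: per-feature contributions to a single decision from a linear model. Everything below is hypothetical, including the synthetic “lending” data and feature names; it shows one possible form an explanation could take, not what the GDPR requires or what any deployed system provides.

```python
# Hypothetical sketch of a per-decision "explanation": how much each feature
# pushes one linear model's decision up or down. Data and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_of_credit_history", "missed_payments", "income_thousands"]

# Synthetic training data standing in for a lender's historical records.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.8, -1.5, 1.0]) + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# "Explain" one decision: contribution of each feature to this applicant's score.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>25}: {value:+.2f}")
print("decision:", "approve" if model.predict([applicant])[0] else "deny")
```

Whether a list like this counts as a valid or complete explanation, especially for models far more complex than a logistic regression, is precisely the open question raised here.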

IRA FLATOW: Hannah, I have got 30 seconds for you to answer that question.

HANNAH FRY: Yeah. I mean, GDPR is a step towards putting the power slightly back in the hands of the individual, but at the moment, it seems like you can hide a lot of stuff in terms and conditions. And in Europe, we have essentially been drowning in terms and conditions over the last few months.

IRA FLATOW: Hannah Fry, associate professor of mathematics of cities in the Centre for Advanced Spatial Analysis, University College London, author of the book “Hello World: Being Human in the Age of Algorithms.” And you can get a sneak peek of her book at sciencefriday.com/helloworld. And Suresh Venkatasubramanian is a professor in the School of Computing here at the University of Utah in Salt Lake City and a board member of the ACLU. Welcome. Thank you both for taking the time to be with us today.

SURESH VENKATASUBRAMANIAN: Thank you very much.

HANNAH FRY: Thank you.

Copyright © 2018 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producer

About Lucy Huang

Lucy Huang is a freelance radio producer and was Science Friday’s summer 2018 radio intern. When she’s not covering science stories, she’s busy procrasti-baking.
