A Flaw in Human Judgment: How Making Decisions Isn’t As Objective As You Think
If two people are presented with the same set of facts, they will often draw different conclusions. For example, judges often dole out different sentences for the same case, which can lead to an unjust system. This unwanted variability in judgments in which we expect uniformity is what psychologist Daniel Kahneman calls “noise.”
The importance of thoughtful decision-making has come into stark relief during the pandemic and in the events leading up to the January 6th insurrection.
Ira talks with Nobel Prize-winning psychologist Daniel Kahneman about the role of ‘noise’ in human judgment, his long career studying cognitive biases, and how systematic decision-making can result in fewer errors.
Kahneman is the co-author of “Noise: A Flaw in Human Judgment,” along with Olivier Sibony and Cass R. Sunstein, now available in paperback.
Daniel Kahneman is professor emeritus at Princeton University and co-author of Noise: A Flaw in Human Judgment.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. I’ve been thinking a lot about what drives powerful people to make, well, how can you say it, bad decisions, decisions that seem shortsighted or ignore key facts. The importance of thoughtful decision making has come into stark relief during the pandemic and the events leading up to the January 6 insurrection.
I was drawn to the research of Nobel Prize-winning psychologist Daniel Kahneman, who has made a career of studying decision making. I was hoping he would help me better understand just what’s going on. His most recent book, which he co-authored with Olivier Sibony and Cass Sunstein, is now available in paperback. It’s called Noise: A Flaw in Human Judgment.
Daniel Kahneman, welcome to Science Friday.
DANIEL KAHNEMAN: My pleasure.
IRA FLATOW: Nice to have you. All right, let’s begin talking about this. Your book is called Noise. What is noise? And how is it different from bias?
DANIEL KAHNEMAN: Well, the starting point, really, is that judgment is a form of measurement. We call it a measurement where the instrument is the human mind. And so the theory and the concept of measurement are relevant. Bias, in the theory of measurement, is simply an average error that is not zero. That’s bias.
Noise, in the theory of measurement, is simply variability. So you could measure a line, and measure it repeatedly. If your ruler is fine enough, you’re not going to get the same measurement twice in a row. There’s going to be variability. That variability is noise.
And you can see that noise is a problem for accuracy. Assume that there is no bias, that is, that the average of your measurements is precisely equal to the length of the line. Still, you are obviously making mistakes if your judgments or your measurements are scattered around that value. So that’s noise, and that’s bias.
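The distinction Kahneman draws can be sketched numerically. This is a minimal illustration with invented measurements, not an example from the book: the readings happen to center on the true value (zero bias) yet still scatter (nonzero noise).

```python
import statistics

# Hypothetical numbers, invented for illustration: repeated measurements
# of a line whose true length is 10.0 cm.
true_length = 10.0
measurements = [10.4, 9.6, 10.5, 9.5, 10.0]

# Bias: the average error. These readings center on the true value,
# so the bias is zero even though individual readings are off.
errors = [m - true_length for m in measurements]
bias = statistics.mean(errors)

# Noise: the scatter of the measurements around their own mean,
# here summarized as a sample standard deviation.
noise = statistics.stdev(measurements)

print(f"bias  = {bias:+.2f}")  # near zero: no systematic error
print(f"noise = {noise:.2f}")  # clearly nonzero: the readings still scatter
```

In Kahneman’s terms, a measuring process can be unbiased and still inaccurate, because its error has a variability component as well as an average component.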
IRA FLATOW: So why do people make those mistakes? Why do we have people measuring things and then coming up with different results?
DANIEL KAHNEMAN: Well, there are several reasons. One reason is that people are inherently noisy, so that when you sign your name twice in a row, it doesn’t look exactly the same. We cannot, in fact, exactly repeat ourselves. We move through a series of states, and those states have an effect on the judgments we make. We call that occasion noise. So a judge passing sentences is not the same in the morning and in the afternoon. The judge is not the same in a good mood and in a bad mood.
And then there are two other kinds of noise. To understand the next form, let’s stay with the judge. Some judges are more severe than others. Some judges are lenient. We call that level noise, because it is the level of their judgments that differs; each judge has an individual bias.
But the most interesting source of noise is that judges do not see the world in the same way. That is, if they had to rank defendants or crimes, they would not rank them alike. Some judges are more severe with young defendants than with old defendants. For other judges, it’s the opposite. Those differences, which we call pattern noise, are really interesting, and in quite a few situations they are the main source of noise.
IRA FLATOW: Is that where biases come in? Because people have different biases, and that’s what makes it noisy?
DANIEL KAHNEMAN: That’s exactly it. Noise, certainly pattern noise, is really produced by the fact that people have different biases.
IRA FLATOW: A lot of us have experienced that when we go to doctors, and we get a second or a third opinion. The doctors are looking at us, conducting the same tests, and yet they come up with a different diagnosis or a different prognosis.
DANIEL KAHNEMAN: There is a lot of noise in medicine. This is really one of the reasons we wrote the book: we find a lot of noise in very important systems in society. Now, there are easy cases. It’s easy to diagnose a common cold. But the moment things get more challenging, different physicians make different judgments. And in very difficult cases, of course, there is a lot of noise. So noise in medicine is a big problem.
IRA FLATOW: Speaking about that, when thinking about judgments that have a wide range of decisions, I can’t help but think about the COVID pandemic. How can the concept of noise help us better understand how differently world leaders decide to deal with the virus?
DANIEL KAHNEMAN: Well, it’s one of the best examples of noise that we know. That is, leaders at all levels, from municipalities to countries, were faced with problems that were quite similar, and they made a wide variety of different choices. That’s an example of noise. And each of them did it thinking that they were doing the right thing. But obviously, they couldn’t all be doing the right thing if they were doing different things in the same situation.
IRA FLATOW: So how might leaders then be able to make better decisions and reduce noise around the very complicated decisions that need to be made about COVID?
DANIEL KAHNEMAN: Well, we have a piece of advice that is unlikely to be taken up very soon. But our advice is that, in the case of COVID, it’s a matter of designing how you’re going to make the decision and then making the decision in a disciplined way. When you design the process by which you will reach conclusions, you are going to have less noise. People are more likely to reach the same conclusions if they all follow a sensible process to get to the decision.
There is one source of noise that is not going to be controlled by that, and that is differences in values. If people want different things, they will reach different judgments. But if you’re faced with an objective problem, say you’re trying to control the number of hospitalizations, that’s a problem where the value is pretty obvious. With a systematic process of decision making, people ought to be, and we think would be, less noisy than they were.
IRA FLATOW: When talking about making these decisions, what about using artificial intelligence or machine learning? There was a study that came out last year showing that AI was better than dermatologists at detecting melanoma. How does AI reduce noise in decision making?
DANIEL KAHNEMAN: AI does better than reduce noise. Any algorithm, any systematic rule that takes inputs and combines them in a specified way, will have one crucial property: it will be noise-free. Present an algorithm with the same problem twice, and you’re going to get the same answer.
Algorithms are noise-free, and it turns out this is one of their major advantages over humans. That is, when you compare the performance of people to the performance of algorithms and rules, in many situations the algorithms and rules already match or outperform people. And the main reason for the lack of accuracy of people compared to algorithms is noise. People are noisy. Algorithms are not.
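The point about noise-free rules can be made concrete with a toy sketch. The scoring rule, its weights, and the simulated judge below are all invented for illustration; the book’s claim is only about the general class of fixed, mechanical rules.

```python
import random

# A hypothetical linear scoring rule with made-up weights, standing in
# for any fixed, mechanical combination of inputs.
def risk_score(age: int, prior_offenses: int) -> float:
    return 0.1 * age + 2.0 * prior_offenses

case = {"age": 30, "prior_offenses": 2}

# The rule is noise-free: the identical case always gets the identical score.
assert risk_score(**case) == risk_score(**case)

# A simulated human judge: the same underlying policy plus random
# occasion noise (mood, fatigue, time of day).
def human_judge(age, prior_offenses, rng):
    return risk_score(age, prior_offenses) + rng.gauss(0, 1.5)

rng = random.Random(0)
judgments = [human_judge(**case, rng=rng) for _ in range(5)]
print(judgments)  # five different numbers for the same case
```

Even when the simulated judge’s policy is exactly the rule’s, the added occasion noise makes the judge less accurate on average; the rule simply never wobbles.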
IRA FLATOW: But you’ll get pushback from doctors or other people who say, every patient is different. I have to treat every patient differently, and that takes a human interaction. How do you answer that?
DANIEL KAHNEMAN: Well, I answer that by looking at data and by comparing the number of mistakes that are made. It is true that humans have a tendency to view each case as unique. But it’s also true that if you take just a few objective measures of the situation and combine them appropriately, in many situations that objective combination of scores is going to do better than a human judge, even though the human judge has access to a lot of information and many powerful intuitions.
IRA FLATOW: I hear the same kind of argument about how AI is better than people when I talk to the AI people who are designing self-driving cars. They say, we get a lot of pushback that the AI is not smarter, but if you look at the data, you’ll see that a computer will drive a car better than a person, meaning there will be fewer accidents.
DANIEL KAHNEMAN: Well, all of us are biased against algorithms. And the reason we are is that when a self-driving car causes an accident, we look at that accident and we say, oh, I wouldn’t have done it. A human driver would just not have made that mistake. But of course, no one asks the self-driving car about the mistakes that humans made.
And the same is true in all contexts where you measure the performance of people against the performance of algorithms. The question should be overall accuracy. But that’s not the way people look at it: the mistakes that artificial intelligence makes look stupid to us. They are mistakes we wouldn’t make. And the fact that we make more mistakes overall than the AI, that’s not something we respond to.
IRA FLATOW: One of the ideas that stuck out to me in the book was about overconfident leaders who too heavily trust their own intuition instead of weighing evidence, or who are too confident in a decision whose outcome is due more to chance than to their own judgment. What’s going on here?
DANIEL KAHNEMAN: Well, what’s going on is that most of us are overconfident most of the time. And in a way, it’s a very good thing. By overconfident what I mean is that we look at the world, and we see the world in a particular way. And we feel a sense of validity. We feel that the reason we see the world as we do is because that’s the way it is.
And we cannot imagine that other people looking at exactly the same situation would see it differently, because I see the truth, and if I respect your judgment, I expect you to see exactly the same thing that I do. Now, that’s one aspect of it. Overconfidence is almost built in.
But overconfidence in intuition is particularly pernicious when it’s not justified. Now, there are cases where intuitive expertise exists. Chess players can look at a chess position, and every move that occurs to them is going to be a strong one. But people feel they have intuitions even when there is no way they could have valid intuitions.
For example, anybody who makes predictions about what will happen in the stock market to individual stocks, in particular, is just deluding himself. It’s not possible. And yet people feel that it is possible. They have intuitions, and they trust them, and it’s a big problem.
IRA FLATOW: I’m Ira Flatow. This is Science Friday from WNYC Studios. If you’re just joining us, I’m speaking with Nobel Prize winner Daniel Kahneman about some of the flaws in human judgment. One of the things I’ve been batting around a lot lately is what biases lead people to believe something that is patently false, specifically how so many people bought into the big lie that Donald Trump really won the election and then the ensuing insurrection of January 6. What makes people believe in an easily disputable lie so fully?
DANIEL KAHNEMAN: Well, we have the wrong idea about where beliefs come from, our own and those of others. We think we believe whatever we believe because we have evidence for it, because we have reasons for believing. When you ask people why they believe something, they are not going to be stumped. They are going to give you reasons that they’re convinced explain their beliefs.
But actually, the correct way to think about this is to reverse it. People believe in the reasons because they believe the conclusion. The conclusion comes first. And the belief in the conclusion, in many cases, is largely determined by social factors.
You believe what people that you love and trust believe, and then you find reasons for it. And they tell you reasons for believing that, and you accept the reasons. But it’s largely a social phenomenon. It’s not an error of reasoning.
And that, by the way, is true for your beliefs and my beliefs. They reflect how we’ve been socialized. They reflect the company we keep. They reflect our belief in certain ways of reaching conclusions, like a belief in the scientific method. Other people just have different beliefs because they’ve been socialized differently. And because they have different beliefs, they accept different kinds of evidence, and the evidence we think is overwhelming just doesn’t convince them of anything.
IRA FLATOW: Are there cases in which variability in judgment is actually a good thing?
DANIEL KAHNEMAN: Oh, many cases. And that’s important: we define noise as unwanted variability. When you have underwriters in an insurance company looking at the same risk, you would want them to reach approximately or exactly the same conclusions. But I want variability in the judgments of my film critics. I want variability in the judgments and opinions of people who are creating or inventing new things. So variability is often very desirable. But in some contexts, variability is noxious.
IRA FLATOW: One last question. I’ve been following your career for a long time, and I’ve always wondered what got you and your longtime partner, the late psychologist Amos Tversky, so interested in studying human biases. How did you fellas decide this was something you wanted to study?
DANIEL KAHNEMAN: Well, it was really ironic research. We found that we ourselves were prone to mistakes. It was all about statistical thinking when we started. We noticed that we had wrong intuitions about many statistical problems. We knew the solutions, and yet the wrong intuitions remained attractive.
IRA FLATOW: Can you put a finger on why we have so many flaws in our intuitive judgment?
DANIEL KAHNEMAN: It’s not that we could perform surgery and excise all the sources of bias from human cognition. If you removed all the sources of bias, you would remove a great deal of what makes cognition accurate in most situations. We are built to reach conclusions, not necessarily in a logical way, but in a heuristic way.
And heuristic ways of thinking necessarily lead to some mistakes, although on average they can lead to correct judgments, and faster than reasoning would. It’s not that we’re studying incorrect mechanisms. The mechanisms are very useful. But sometimes a mechanism that is usually useful will lead people to systematic errors.
IRA FLATOW: Well, thank you very much, Dr. Kahneman, for taking time to be with us today.
DANIEL KAHNEMAN: It’s a pleasure talking with you.
IRA FLATOW: Daniel Kahneman, Nobel Prize winner and professor emeritus at Princeton University, is the co-author of the book Noise: A Flaw in Human Judgment. If you want to hear more from Daniel Kahneman and how he approaches his work, go to sciencefriday.com/noise to watch a profile of him from our Desktop Diary video series back in 2013.