The Uncertain Science Behind What We Understand As ‘Truth’
Throughout history, humans have been on a search for truth. From the ancient Greeks and their belief in a universal truth, to our Founding Fathers writing, “We hold these truths to be self-evident, that all men are created equal.” In a world of disinformation, conspiracy theories, and the rising influence of artificial intelligence, where does truth fit in? Mathematician Adam Kucharski, author of Proof: The Art and Science of Certainty, joins Host Ira Flatow to discuss the complicated truth.
Read an excerpt of Proof: The Art and Science of Certainty.
Dr. Adam Kucharski is a mathematician and author of “Proof: The Art and Science of Certainty.” He is based in London.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. One of the foundational qualities of the human race is a search for truth. We can trace this through history, from the ancient Greeks and their belief in a universal truth to early scientists like Isaac Newton, uncovering core concepts in physics and math, to our Founding Fathers writing, “We hold these truths to be self-evident, that all men are created equal.”
Feels like these days, our relationship to the truth is a little different. In a world of disinformation, people who have their own different set of facts, the rising influence of artificial intelligence, where does the truth fit in, and how do we determine what it is? Joining me to discuss is my guest, Adam Kucharski, author of Proof: The Art and Science of Certainty. He’s based in London. Welcome to Science Friday.
ADAM KUCHARSKI: Thank you for having me.
IRA FLATOW: You’re quite welcome. What’s your personal relationship with uncertainty? Are you comfortable with it?
ADAM KUCHARSKI: I think it’s something I’ve become increasingly comfortable with. So my background was in mathematics, which is obviously this world of supposed certainty and proofs that last forever. But increasingly, I’ve moved into the world of real-world data, real-world problems. And I think that’s really made me much more aware not only of the uncertainty we have in trying to work out what’s happening, but also the subjectivity that people can have in the level of evidence you need to eventually act on something.
IRA FLATOW: And where do we set that line?
ADAM KUCHARSKI: It varies quite a lot. One of the things that has become a bit of a tradition in statistics– and it can be traced back about a hundred years– is this idea that you don’t want to set the bar too high, because then you’ll ignore things that might be interesting or useful. But you don’t want to set it too low and let a lot of things that are false through.
And one of the things we’ve seen is a convergence in statistics, and in a lot of the medical literature, around this 5% value. You want there to be, if you’re doing a study, less than a 5% chance that you’d get a result as unusual, as extreme, as that.
IRA FLATOW: So it’s the 95% certainty.
ADAM KUCHARSKI: Exactly, yeah. And so you see that in medical papers, in trials. You see that in experiments. But it’s become this kind of hard cutoff. And it was fairly arbitrary, how it’s defined. It was actually just because it made the maths a bit easier. But of course, the type of certainty we need varies a lot with the problem. And other statisticians in the 20th century, particularly William Gosset, who was working at Guinness, took a far more pragmatic outlook.
And they said, well, actually, there shouldn’t be this magic value, above which we’re convinced and below which we’re not. Because it will depend on, well, what’s the benefit of the change we’re potentially going to make? What’s the cost it might have? How hard is it to go out and get more evidence? And I think we see that even in our daily lives, that something that isn’t particularly costly for you to experiment with, you’re probably happier to make that change with lower levels of evidence, whereas if something’s going to be enormously difficult or cause you a lot of headaches if you get it wrong, you’re going to want a higher level of certainty.
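To make that 5% convention concrete, here’s a minimal sketch in Python, with hypothetical numbers that aren’t drawn from the interview: it estimates by simulation how often pure chance would produce a result at least as extreme as the one observed, then applies the conventional 0.05 cutoff.

```python
import random

random.seed(1)

# Hypothetical observation: 62 heads in 100 flips of a coin we suspect
# is biased. How often would a fair coin give a result this extreme?
observed_heads = 62
n_flips, n_sims = 100, 100_000

extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # Two-sided: count results at least as extreme in either direction.
    if heads >= observed_heads or heads <= n_flips - observed_heads:
        extreme += 1

p_value = extreme / n_sims
print(f"estimated p-value: {p_value:.3f}")   # comes out around 0.02
print("below the 5% bar" if p_value < 0.05 else "not below the 5% bar")
```

Gosset’s pragmatic point, in this framing, is that 0.05 is not a magic number: the right cutoff depends on the benefit of the change, the cost of being wrong, and how hard it is to gather more evidence.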
IRA FLATOW: But people have a lot of difficulty with uncertainty, don’t they?
ADAM KUCHARSKI: I think it’s something that humans love to go out of their way to avoid. There’s even this nice study, done internally at the CIA in the 1950s. Sherman Kent, one of their analysts, had written a report that concluded there was a serious possibility that the then USSR might try and invade what was then Yugoslavia. And when he was talking to others, he realized that everybody had a different notion of what that term meant. And people would talk about probable and possible. And he got quite frustrated, because he said that, actually, people will kind of go out of their way to avoid being pinned down on something.
So I think part of it is that if there’s uncertainty about the world, we don’t like having to deal with it. I think also, you can get situations where things can be very counterintuitive, even in terms of how you balance different types of errors or how you might have to, say, update your belief based on different information. That can often create a lot of tension, because it might not behave in the way we expect it to, whether we’re trying to, for example, convince others.
Or in the case of some medical tests, if you have things where there’s a small false positive rate and not many people have a disease, your interpretation of that test result might be very different from what you’d expect. You might think, oh, I’ve got a positive test. That means I’ve probably got the disease. But actually, the numbers can combine in slightly nonintuitive ways.
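To illustrate how those numbers combine, here’s a short Bayes’ theorem sketch with hypothetical figures (a 1% prevalence and a 5% false-positive rate, neither taken from the interview):

```python
# Hypothetical inputs, chosen only for illustration:
prevalence = 0.01           # 1% of people tested actually have the disease
sensitivity = 0.99          # P(positive test | disease)
false_positive_rate = 0.05  # P(positive test | no disease)

# Total probability of testing positive (true plus false positives).
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: probability of disease given a positive test.
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")
# Prints about 16.7%: even after a positive result, the disease is
# still unlikely, because false positives outnumber true positives
# when the condition is rare.
```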
IRA FLATOW: Right. Not only are you a mathematician, but you’re also an epidemiologist. And that opens up a whole bunch of stuff to talk about, especially the first few years of the COVID pandemic. Tell me how you understood what truth and certainty looked like in those early days.
ADAM KUCHARSKI: Yeah. COVID was just this situation where a lot of this knowledge we, to some extent, had to build from scratch. We had examples of it from other pathogens, from other situations. But it was something where you didn’t have the luxury of just waiting and seeing for a few years. In that sort of situation, not making a decision is making a decision.
So a lot of the work that we did was building up that early evidence around everything from the severity, to the extent of superspreading, to the characteristics of early variants. And one of the things I became very aware of, working a lot with evidence to inform policy, is that in some cases you might have quite a lot of uncertainty but can still say something useful as a conclusion.
So to take the emergence of the alpha variant or the delta variant, it was extremely hard early on to pin down exactly how much more transmissible it was. It might be 20%. It might be 40%. It might be 60%. But actually, all of those give you the same conclusion– that you’re going to be seeing a rising epidemic, and you’re going to be getting into trouble. And so I think for that, again, it was that conversion between something that might feel like quite an uncertain problem and actually saying, can we at least know which side of the fence we’re on and simplify in a way that’s still useful for politicians?
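A rough back-of-envelope version of that reasoning, with assumed numbers rather than anything from the interview: if the baseline reproduction number sits just below 1, any of those transmissibility advantages pushes it above 1, and all three give the same qualitative conclusion of a rising epidemic.

```python
# Hypothetical baseline: an epidemic held just below the R = 1 threshold.
baseline_R = 0.9

for advantage in (0.2, 0.4, 0.6):    # 20%, 40%, or 60% more transmissible
    R_variant = baseline_R * (1 + advantage)
    cases = 100.0
    for _ in range(6):               # six generations of transmission
        cases *= R_variant
    print(f"{advantage:.0%} advantage: R ≈ {R_variant:.2f}, "
          f"100 cases become ~{cases:.0f} after six generations")

# Every run has R above 1 and rising case numbers: the same qualitative
# conclusion despite large uncertainty in the exact transmissibility.
```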
I think also, though, because it was such a public crisis that affected so many people, and knowledge did emerge over time, we saw some examples in real time where there were overconfident statements about certain features of the pandemic that played out badly, perhaps because governments didn’t want to acknowledge the unknowns. But on the other hand, sometimes communicating that uncertainty was important, because the situation was going to change.
IRA FLATOW: Was that, though, the case? Did public officials communicate this uncertainty successfully, or was there a better way to do this?
ADAM KUCHARSKI: I think we saw some countries do it a bit better. Particularly, places like Denmark and Singapore stood out, where policy was going to have to change. If you had an emerging variant, for example, you might not know the exact risk of it. You have to decide what you’re going to do about it, and then you might want to modify that. And I think they were much better at communicating that this is what we’re going to do at this point in time, and this is what we might update. I think other places were a bit more focused on saying, this is the policy. This is what we’re going to have to do. And then when they pivoted, people got a little bit confused about that.
IRA FLATOW: Do you think this ultimately led some people to lose trust in science?
ADAM KUCHARSKI: We’re seeing a bit of a mixed picture, because on certain metrics, trust in science is still high relative to a lot of other industries. I think what we’ve seen in recent years, though, is a lot of things intertwined together. There’s been useful research showing that, often, these things don’t exist in isolation– that your relationship with science or with public health authorities is going to be interlinked with your relationship with other institutions, judicial systems, governments, and so on.
I think it also links a bit– and I’ve seen this firsthand from people who send scientists angry messages– to your relationship with the wider consensus. So I gave a talk on conspiracy theories earlier in the year. And it was really striking how many of the comments– first of all, there was a lot of detail. These weren’t just one-off random comments. But there was also this very strong idea of, we’re seeing a truth that other people aren’t seeing. It’s almost a feeling of community underlying it.
So I think there are a lot of these other dynamics, in terms of how it influences your relationship with power, institutions, authority. Are you actually this community that has this hidden truth? I think there are a lot of other dimensions beyond just pure trust in a scientific fact.
IRA FLATOW: Well, you, as a scientist, are you frustrated with the politics of truth here?
ADAM KUCHARSKI: I think one of the things that becomes very challenging is where scientific evidence and political choices get very intertwined. An obvious example of that is where science is either constrained or undermined to serve a political or commercial goal. And I think that’s what we’ve seen if you look back at things like smoking, for example– a huge effort to undermine a lot of the scientific evidence.
It also becomes challenging– I think we saw during COVID that “follow the science” was this sort of mantra. And I think a lot of it was essentially politicians using it as cover for not having to make very difficult decisions. The science could only take you so far. One of the things we were always thinking about how to present is that you essentially have a series of bad options. And I think anyone who says that there was a simple solution to the pandemic is wrong; there was a bunch of very difficult trade-offs.
And actually, even if you look back to the 1918 pandemic, it’s fascinating how many of those newspaper quotes you could have just pulled out of 2020. People were just arguing about which bits of society should be valued in different ways. And then it’s very much on politicians to make those decisions and, yeah, on us as a society to weigh those things up.
And I think, increasingly, a lot of that has been pinned on the idea that it was a scientific choice, when actually, science can’t weigh up all of those features of society in that kind of way. And in my view, it shouldn’t. I think something like epidemiology is one thread contributing to that wider, very difficult decision.
IRA FLATOW: Well, we’re seeing that not only medically, but we’re seeing that with the environmental situation, with the denial of climate change and the climate crisis, like, oh, don’t look over there. Let’s just ignore that and do away with it, like it doesn’t exist.
ADAM KUCHARSKI: Yeah. It’s also just really striking, the level at which the disagreements happen. If you look at something like climate change, I think the healthy debate is where you have the documented evidence about the situation we’re facing, you have the evidence– which has a lot more uncertainty– about what we might do about it, and then you have the policy decision about what we should do about it.
And I think that was one of the things that was striking– I talked to quite a lot of climate scientists for the book– that there is very good scientific consensus on what is happening. There’s less consensus on, of all these policy levers, exactly which ones are going to have which effect, and what we should be doing about it. But I think what’s been happening, as you say, is that people have gone back to those fundamentals, and rather than having those debates– perhaps because they don’t like some of the changes that might be required– they attack the more fundamental science instead.
And we see that with other interventions as well. Understandably, for COVID, there were some people who were not very keen on things like vaccine mandates, because they were seen as an infringement of freedom. But rather than just talking about the policy, it went back to, OK, well, vaccines don’t work, or claims that the pathogen itself isn’t very severe at all.
And I think, ultimately, it’s dislike for the policy, and then you get people trying to target the system further down instead– even though, whether we’re talking about the severity of something like COVID or the extent of the hazard that climate change poses, that foundational evidence is very strong.
IRA FLATOW: There are a lot of things about science and medicine that we don’t know the truth behind. Simple things like anesthesia– we know it works. We don’t know how. Same goes for some medicines. And in physics, we know quantum physics works but not why. Richard Feynman, a very well-known and respected Nobel Prize-winning scientist, said, if any scientist tells you they know why it happens, they’re lying. So we know it works, but we don’t know why. Should we still consider these things the truth, even if we can’t verify the why?
ADAM KUCHARSKI: I think that’s a really good point. And I think it gets to the heart of a lot of the tension in our interactions with technology– particularly in the modern era, the extent to which it’s important to have confidence that something works versus an understanding of why it works. It really gets to the heart of what science is in the modern era.
As you said, there are elements of medicine where we don’t understand exactly all the underlying processes. But we have confidence that if we do this, we’ll get this effect. And even if you run a clinical trial, it will tell you, often with good confidence, whether or not something has the effect you’re testing. But it won’t necessarily give you that why; you need to get that from other sources.
And I think in other fields, in things like AI, for example, people are much more uncomfortable with, say, self-driving cars that have accidents that we can’t explain. So even if a self-driving car was, on average, much safer than a human– which isn’t, let’s be honest, massively difficult, because humans do a lot of very strange, unhelpful things when they’re driving– I think there would still be that discomfort about that kind of lack of understanding.
And I think we’ve also seen it talking to scientists who’ve worked on a lot of AI discoveries, such as some of the protein prediction work, where– and I’ve certainly had it myself– growing up with science, you have this idea that there’s a really elegant theory, and you can understand it. Even the mathematician in me– you want to get pen and paper out and solve it, or you want to be able to do the tangible experiment.
Abraham Lincoln, for example, taught himself the classic Greek mathematical proofs because he wanted to get better at demonstrating things. And that era has kind of moved on. We’re very much entering an era, both in maths– where a lot of the proofs now involve computers and can’t really be easily verified by hand– and in AI-driven discovery in science, where there isn’t that simple explanation.
But we can have predictions that are enormously powerful, while still having to come to terms with, well, this is science. And just as we have these medical tools that are very valuable, we can have scientific discoveries we can have confidence in. But we lose, perhaps, a little bit of that romanticism around the elegance of our explanations.
IRA FLATOW: Right. Well, as we go forward into this technological future, are there any lessons we can take from our predecessors?
ADAM KUCHARSKI: I think one of the things that really stands out for me is the danger of assuming certainty. We’ve seen it in recent years with this assumption that certain algorithms have reached superhuman status– for example, in games, which are very well defined and should, in theory, be a perfect starting point for AI to reach that kind of mastery– when, actually, people who poked around have found that that’s not true. And I think there’s an analogy there, even going back about 150 years, with that assumed idea that maths had solved it all and we had these universal truths. And people started to poke around and find examples where that doesn’t work.
I think also, being really aware of those balances we’ve got to strike– even in discussions of misinformation, we focus a lot on not wanting people to believe falsehoods. But there are two errors you can make with information. One is believing things that are false, but another is not believing things that are true. And so there’s, I think, increasing awareness that interventions that focus just on false information might work, but they do it by reducing belief in all information.
So we need to focus on the fact that, for many of these problems, there are actually two errors that we need to be balancing. We don’t want to inadvertently intervene on one and undermine trust in other information systems along the way.
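One way to see that trade-off is a toy calculation, with made-up numbers rather than anything from the interview: a blanket increase in skepticism reduces belief in falsehoods and belief in truths at the same time.

```python
# Hypothetical news diet: 80 true items, 20 false ones, and a single
# probability that any given item is believed.
true_items, false_items = 80, 20

def information_errors(belief_rate):
    believed_false = false_items * belief_rate           # error 1
    disbelieved_true = true_items * (1 - belief_rate)    # error 2
    return believed_false, disbelieved_true

for label, rate in [("before intervention", 0.70),
                    ("after blanket skepticism", 0.40)]:
    bf, dt = information_errors(rate)
    print(f"{label}: {bf:.0f} falsehoods believed, {dt:.0f} truths rejected")

# Falsehoods believed drop from 14 to 8, but truths rejected rise from
# 24 to 48: pushing down one error inflates the other.
```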
IRA FLATOW: Well, Adam, we have run out of time talking about so many things. And I want to thank you for taking time to be with us today.
ADAM KUCHARSKI: Yeah, great to chat. Thank you.
IRA FLATOW: Adam Kucharski, author of Proof: The Art and Science of Certainty– a really, really good book, a good read for the summer. He’s based in London. And you can read an excerpt from the book. Head over to our website, sciencefriday.com/proof.
Copyright © 2025 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/