When Scientists Get It Wrong
A couple of years ago, Julia Strand was trying and failing to replicate a study she’d published. At the time, she was an assistant professor without tenure, and the original study had presented her most exciting finding to date. But when she and her co-authors tried to replicate it, they got the opposite results. Then one night, Julia discovered why. In her original code, she’d made a tiny but critical error, and now, with her reputation and job on the line, she was going to have to tell the world about it.
Science is often said to be “self-correcting”—through peer review, replication, and community dialogue, scientists collectively find mistakes in their work, and continually revise their understanding of the world. But what does self-correction look like in practice? And how likely are scientists to admit they’re wrong?
Julia eventually submitted her story to the Loss of Confidence Project, which invited psychologists to publicly admit mistakes in their published research. Our guest, Julia Rohrer, a lecturer in psychology, organized the project along with two others. In an anonymous survey of 316 researchers, almost half said they had lost confidence in one of their findings, but ultimately, only 13 researchers submitted public testimonials to the project.
Brian Resnick, who co-created Vox’s Unexplainable podcast and has written about intellectual humility, explains why we often think we’re right when we’re wrong, how others perceive us when we fess up to mistakes, and what all this means for our trust in science.
Listen to Julia Strand talk about discovering—and owning up to—a major mistake in her own work.
Brian Resnick is senior science reporter & co-creator of Vox’s Unexplainable podcast.
Julia Rohrer is a Lecturer at the Department of Psychology, University of Leipzig.
Julia Strand is an Associate Professor of Psychology at Carleton College.
JOHN DANKOSKY: This is Science Friday. I’m John Dankosky. You’ll often hear that science is self-correcting. Even though science doesn’t always get it right the first time, researchers collectively catch their mistakes and correct them, and are constantly updating what they believe to be true based on new evidence.
But what does self-correction really look like? And how willing are scientists to admit when they’re wrong? A few years ago a psychology researcher named Julia Strand discovered a big mistake in her own work. It was in a study that had gotten her lots of positive attention. There was even talk of a new app based on it.
But when they tried to replicate it, it just didn’t work. And then one night Julia’s sitting at her laptop, trying to figure out what went wrong, and she notices a tiny error in her original code.
JULIA STRAND: So when I realized what had happened, I mean first I saw that little thing and thought, oh, wait, is that what did it? No, no, don’t panic yet Julia, that might not be it. And I tested a couple of other things, and then it was time to panic. Yeah. I had made this mistake.
So the bottom dropped out of my stomach. I started crying, and I just started realizing all of the consequences that this was going to have. So if– bringing this to light would mean telling my research students, telling my co-authors, telling the chair of my tenure committee and the Dean of the College, because I was currently under review for tenure. Like my committee was meeting that month to decide whether I should have a job for the rest of my life or get fired.
Oh my. And for just a moment, she thought, what if she just didn’t tell anyone? But obviously in the end, Julia came forward. Now you can hear more about her study and what happened when she told the world about her mistake on the Science Friday podcast.
We heard about Julia’s story because she submitted it to something called the Loss of Confidence Project. It’s an initiative that invited psychology researchers to fess up and correct their own mistakes. So today we have two guests to talk about this. Another Julia, Julia Rohrer, a lecturer at the Department of Psychology at the University of Leipzig, who organized the Loss of Confidence Project, and Brian Resnick, senior science reporter and co-creator of Vox’s Unexplainable podcast. I’d like to welcome you both to Science Friday.
JULIA ROHRER: Hi.
BRIAN RESNICK: Hey.
JOHN DANKOSKY: So Brian, in your writing, you’ve made the case for intellectual humility. So first of all, what do you mean by intellectual humility? What got you thinking about this?
BRIAN RESNICK: Yeah, intellectual humility is just the trait, the way of thinking, of asking this simple question: what if I’m wrong? It’s a way of measuring your confidence in your own thoughts. And what really fascinated me here, so I’ve been reporting on psychology for a while, and watching psychology go through what they call the replication crisis. The clip we just heard is a bit of an example of that, of studies that upon reinspection aren’t holding up.
And so I was just fascinated by this whole field trying to become more intellectually humble, trying to be more willing to admit it when they were wrong. And when I first came across the Loss of Confidence Project, I thought, ah, this is so interesting. On paper it sounds so beautiful and pristine, that science is self-correcting. When you see an error, you report it.
But these are also people doing science. People have incentives in their careers, incentives to be confident and to continue on the path they are on. So that’s just good drama. It’s a good story. And it’s also something I think a lot of us can relate to.
JOHN DANKOSKY: Julia, tell us, how did the Loss of Confidence Project start?
JULIA ROHRER: So the project started when somebody who actually wasn’t involved in the project [INAUDIBLE] posted essentially a loss of confidence statement. She was one of the original authors of the paper on power posing that supposedly demonstrated that adopting a particular expansive posture really boosts people’s confidence, increases risk taking, and even affects their hormone levels. And she had actually moved on from that type of research, but she kept getting student requests, because students really wanted to work on that.
So at some point she was just, like, well, whatever, I’m going to post a public statement that I can refer students to. And in that statement she simply explained how the study actually came into being: how they analyzed and re-analyzed the data, and cherry-picked a finding that sounded particularly good. And she explained how she now understood that these practices lead to many false positive findings, and that she no longer believes in that effect.
And when she posted that statement, it got a lot of attention from other psychologists, who started to talk about it in a Facebook group. And a lot of the people were like, wow, this is amazing. This is how science should proceed. And I want to make such a statement as well. But how can we do that? And so out of that there was a collective effort to start a project to which people could submit similar statements and publish them together.
JOHN DANKOSKY: So what kind of submissions are you asking people for?
JULIA ROHRER: So in particular we asked people to describe a finding in which they had lost confidence, but we wanted a special type of loss of confidence. First of all, the original finding needed to be in some way novel. And then the submitting author needed to have lost confidence in the primary or central result of the article because of a theoretical or methodological problem. So we really wanted people in a situation where they have to say, ah, I screwed up, I made a big mistake, and it’s on me.
JOHN DANKOSKY: And just to clarify, right, we’re talking about honest mistakes here that have later come to light. Not any deliberate manipulation of data.
JULIA ROHRER: Well, I guess somebody could have submitted a deliberate fraud case, but of course the people who came forward were people who now knew that they had done something wrong. So it’s mostly people who didn’t understand what they were doing at the time, but in hindsight see how their practices were problematic.
JOHN DANKOSKY: And Brian, I can see why you think that this is such an elegant idea. This could be really important for more scientists to engage in.
BRIAN RESNICK: Yeah, it’s a cornerstone of science. Re-evaluating old claims, doing replication, seeing if things check out. But it turns out in practice this is really hard to do. It’s really hard to do an about-face, as we heard before. It’s hard to admit it when we’re wrong. And I think there are a lot of interesting reasons why that is. And I actually commend psychology: there’s this reform movement in psychology that’s really interested in this problem, and they’re interested in it, I think, because they’re psychologists. Because they know they are people studying people. And why not cast some light on themselves?
JOHN DANKOSKY: Could you talk more about that idea of how humans tend to think that we’re right an awful lot, even when we’re really, really wrong?
BRIAN RESNICK: Yeah, so this is what I think is the most interesting central problem here: we often don’t know when we’re wrong. We don’t often have the perceptual abilities to identify errors. We don’t remember what we’ve misremembered. There’s also this concept called naive realism. Our brains are kind of these organs that make guesses based on our faulty sensory organs, and sometimes they make a wrong guess. But when they make a wrong guess about what we’re seeing or what we’re perceiving, our brains don’t tell us it’s a wrong guess. It just seems like it’s a right guess.
So for me this is such a fascinating problem, not just for scientists, but for everyone, really. Because there’s always going to be something a little bit behind the veil of what we don’t know. And it starts with the curiosity of, what’s behind there? And it might be scary. It might implicate our own work, it might change how we think about things. But I think it starts with just being curious about the things we’re missing.
And then once we discover what we’re missing, because we were curious about it, having the conviction to act on it, to send in a loss of confidence statement, to say, I was wrong. There’s so much in our culture that rewards bluster, and rewards just talking off the top of your head, and confidence. And I really hope that people who are kind of sick of the arrogance they see in society will take intellectual humility to heart themselves.
JOHN DANKOSKY: Well, and there’s a lot there, too. There’s the fact that bluster is often perceived very favorably, certainly in politics, and I’m sure that we can talk a little bit more about that. But there’s just also this idea that in some cases, humans, when faced with something that they don’t know, can almost be more confident about saying something that they’re not sure about than something that they’ve studied, say, for years and years. Is this a very human trait?
BRIAN RESNICK: Yeah, I think what you’re referring to is the Dunning-Kruger effect, where people who perform poorly on a task tend to overrate their ability on that task, and people who perform really well sometimes underrate their ability on that task.
JOHN DANKOSKY: Julia, let’s get back to this idea of self-correcting. When we talk about science being self-correcting, we’re talking about collectively, usually, scientists correcting each other. Why did you decide to focus on individuals correcting their own work?
JULIA ROHRER: So in principle, I actually think that collective self-correction should be sufficient. Even if no scientist ever self-corrected their own mistakes, things could get cleaned up in the next generation and so on. But I believe that process would be horribly slow and tedious, and maybe involve a lot of hostile interactions between researchers.
And so if we are talking about individual self-correction, there’s that huge benefit that the individual knows all the details about the study, has the best overview of the literature, and would be just much more capable of spotting mistakes. And so I think the individual is in a privileged position to point out errors, but at the same time the individual has all the incentives to just hide them.
JOHN DANKOSKY: Yeah, those incentives, as we heard in Julia’s story, are very, very strong. People have all sorts of incentives. She was just talking about things like tenure, professional reputation. These are really big barriers to people wanting to step forward.
JULIA ROHRER: They are indeed. But there’s one particular thing in psychology that is a bit funny. People are very scared of retractions. That is like the worst-case scenario: that your published article gets retracted from the literature.
And it turns out there are actually empirical studies looking at the impact of retractions, and what those studies kind of consistently find is that if authors self-retract their work, it does not seem to harm their reputation. So it has only very small effects on citation metrics of previous work for the same authors. And it seems to have very little impact on their reputation within the field, or even sometimes a positive impact, that they are respected more because people see how much they care about science.
JOHN DANKOSKY: When you asked people to share their mistakes publicly in this project, how many people came forward?
JULIA ROHRER: So, that is an excellent question. It started very slowly. We got like one, two, three, four statements, and then it kind of just got stuck there. And so at some point, when we had a handful, we were like, we really need to motivate more people. So we actually published a preliminary version of the project with those statements included to encourage more people: look, you won’t be alone if you submit here. There’s already a handful of people.
And in the end we also contacted some additional people and so on, and we ended up with 13 written statements. And now depending on how you look at it, that is either like a lot, because there is no similar project, so this is the largest collection of loss of confidence statements to date. Or actually you could also say there should be hundreds of people with similar statements to make that did not participate in the project.
JOHN DANKOSKY: Brian, how do you perceive those numbers? Does that sound like a success, or are there a lot more people out there who maybe could step forward?
BRIAN RESNICK: Yeah, well, the disquieting thing behind this is that there have been these large studies looking at the replicability of psychological science, and finding, like, oh, maybe 40%, 50% of papers replicate. These aren’t definitive numbers. But, yeah, there’s definitely more wrong papers out there than there are wrong papers acknowledged. I would feel confident saying that.
JOHN DANKOSKY: I’m John Dankosky. And this is Science Friday. From WNYC Studios. But Brian, I’d like to talk more about this idea of how admitting mistakes affects how we’re perceived. And I’m going to guess that is a very different thing depending on the field of study, depending on the– I don’t know. The field of life that you’re in.
For instance, in a personal relationship, saying that you’re wrong sometimes is a very positive thing. In politics, as we’ve found, it’s often a very damaging thing. We often want people in politics to step forward and say that they’ve been wrong, but that almost never actually happens, and sometimes when they do it hurts their political chances. Talk a little bit more about how hard it is for people to admit when they’re wrong.
BRIAN RESNICK: Yeah. So it would be impossible to put an overall rule on this, and I wouldn’t be so arrogant as to say that there is one way to admit mistakes. But at least when you look at these smaller-slice examples, like the Loss of Confidence Project, as a reporter, an outside observer here, I saw this project being commended and really applauded. And especially when you’re talking about scientists, these are part of their ideals, or their stated ideals, at least. So I like to see them being applauded for things that their culture should be applauding. I think that makes sense.
Politicians, I’m not a political science expert, so I don’t really know the impact of a politician admitting they’re wrong. There’s often a news media cycle; maybe it reflects how people feel about the politician, maybe it just reflects a juicy story. It’s hard to know sometimes.
But I think in our personal lives at least, it’s easy to overrate the negative perceptions we might garner if we admit we’re wrong. And at the end of the day, too, the truth is really useful. This isn’t just about being virtuous. When we admit we’re wrong about things, whether it’s in science, in politics, in society, we can use truth to move forward. Like, admitting the reality of climate change is useful. And if you’re wrong about that, you should admit it, because it’s not useful to deny it.
So those are the personal feelings about it. And yes, I think there are studies showing that people tend to overrate how negatively they’ll be perceived when they admit they’re wrong. But also it’s just good, because truth is good.
JOHN DANKOSKY: I like that as a bumper sticker. Truth is very useful.
BRIAN RESNICK: I hope so. It can be upsetting sometimes, but it’s always useful.
JOHN DANKOSKY: Julia, very quickly before we have to take a break, psychology, as you said, is a bit on the forefront of this idea. How do you think it has affected, positively or negatively, the reputation of the field of psychology?
JULIA ROHRER: That is a very good question. I’m not sure I can answer that, because I’m an insider, right, and so I’ve got an insider view. People have strong opinions on psychology, of course, and those can go either way. And I’m not sure whether the replication crisis has left a huge impact on the public perception. Maybe in some circles, people who have closely followed the news will have gotten the impression that something is wrong.
But I think the important thing is to note that similar problems exist in other fields, and they might just not get discussed publicly in those fields, not yet at least. And so I really think, in the bigger picture, it doesn’t really matter whether it will affect how people perceive psychology, because their perceptions will just become more accurate if we talk more openly. And I think similar processes might start in other fields as well.
JOHN DANKOSKY: Do you think that we can become better at self-correcting over time if we employ more of these principles?
JULIA ROHRER: I think it is possible to foster a different culture where people feel more open about things, and are more willing to admit their mistakes. So I very much believe that such a cultural transformation is possible in the long run.
JOHN DANKOSKY: We’re talking with Julia Rohrer and Brian Resnick about self-correction and intellectual humility in science. We’re going to take a short break. When we come back, we’re going to talk more about this. This is Science Friday, from WNYC Studios.
This is Science Friday. I’m John Dankosky, in for Ira Flatow, and we’re back with Julia Rohrer and Brian Resnick. We’re talking about self-correction in science. We’ve heard a lot about psychology’s replication crisis, that some of the coolest findings turned out to be wrong. But this isn’t just a psychology problem. I guess big picture, and I’ll start with you, Julia: how do we know what to trust when people are changing their minds so much over time?
JULIA ROHRER: That is a very hard question. The thing is, there is that huge discourse that people should just trust scientists and trust the science, but at the same time, we see that scientists keep making mistakes and keep self-correcting. And that has become particularly salient during this pandemic, where the official recommendations really have completely changed over time.
And so I believe it’s still possible to trust science, but I think people need to have an understanding that self-correction and mistakes are very much part of science. They are very much part of the process, and you can’t get perfect science; science can’t always get it right on the first try. So while I believe that people should, of course, trust science, I think they should have realistic expectations for what it can do in situations with huge uncertainty.
JOHN DANKOSKY: And Brian, this question, as Julia says, is big. One of the reasons it’s so big right now is that, if we take the example of Julia Strand that we played earlier, she had an interesting but not necessarily life-altering finding that she had to go back on. But when we’re talking about COVID drugs, millions of people dying, and a rush to save both lives and economies from collapsing, you can imagine that this idea of intellectual humility is put under very different pressures when that’s what you’re dealing with across the globe.
BRIAN RESNICK: I think what we’ve seen during the pandemic is that we just haven’t known a lot of the answers when people want to know them. And lately the “lab leak” hypothesis, and I put that in scare quotes, has been in the news. People still don’t know where COVID came from, and we might not ever know. For me, the big picture here is never about, oh, scientists don’t know anything. Scientists have learned things, for sure. It’s that knowledge is really hard won, and there are different levels of evidence for things, and it’s more often the case that we’re in the middle of a scientific story, in the middle of a path to discovery, than at the end.
So at least in my work, and what we’re trying to do on our Unexplainable podcast, the goal is really to reorient the stories around the questions. And hopefully telling people about that scientific process, about where science exists along that timeline, can help people understand, oh, of course we don’t have the perfect answer right now. Because the perfect answer is really hard to get at, and we have imperfect tools to get at it.
JOHN DANKOSKY: This idea of a podcast just exploring what scientists don’t know, it’s something different from what science journalists usually do. We’re always interested in what people do know. That’s the big exciting stuff.
BRIAN RESNICK: Yeah, I have to say, like Julia, the Loss of Confidence Project really was a huge inspiration for me in helping to develop the show. When I was looking at the replication crisis in psychology, I was thinking, wow, when the answers fail, when scientists say, oh, we don’t have confidence anymore, the questions remain. There’s just so much more unknown than known.
And the world is still haunted with mystery. So it’s about taking this problem of realizing that we’re wrong, or realizing that we don’t know something, and making it exciting. The journey to just understanding what we don’t know can be thrilling sometimes in itself.
JOHN DANKOSKY: Brian Resnick is senior science reporter and co-creator of Vox’s Unexplainable podcast. Brian, thanks so much.

BRIAN RESNICK: Thank you.

JOHN DANKOSKY: And thanks also to Julia Rohrer, lecturer at the Department of Psychology, University of Leipzig, who organized the Loss of Confidence Project. Thank you so much, Julia.
JULIA ROHRER: Thank you, John.
JOHN DANKOSKY: And the other Julia that you heard at the top was Julia Strand, associate professor of psychology at Carleton College. You can hear more of her story on our Science Friday podcast.
Elah Feder is the former senior producer for podcasts at Science Friday. She produced the Science Diction podcast, and co-hosted and produced the Undiscovered podcast.
John Dankosky works with the radio team to create our weekly show, and is helping to build our State of Science Reporting Network. He’s also been a long-time guest host on Science Friday. He and his wife have four cats, thousands of bees, and a yoga studio in the sleepy Northwest hills of Connecticut.