‘What If I Didn’t Tell?’
A couple of years ago, Julia Strand was trying and failing to replicate a study she’d published. At the time, she was an assistant professor without tenure, and the original study had been one of her proudest accomplishments.
But when she and her co-authors tried to replicate it, they got the opposite results. Then one night, Julia discovered why. In her original code, she’d made a tiny but critical error, and now, with her reputation and job on the line, she was going to have to tell the world about it.
This piece is part of our larger conversation this week about self-correction in science and intellectual humility.
JOHN DANKOSKY: Hi, it’s John. Earlier in this segment, you heard a bit from Julia Strand, who discovered a tiny but critical error in her own published study a couple of years ago. And she had to decide: was she going to admit this, or pretend it never happened? So just for you podcast listeners, here’s Julia’s story.
JULIA STRAND: First, I saw that little thing, and I thought, oh, wait, is that what did it? No, no, don’t panic yet, Julia. That might not be it. And I tested a couple of other things, and then it was time to panic.
I’m Julia Strand. I’m an associate professor of psychology at Carleton College. So my research is on how people understand spoken language.
So you know, if you are in a crowded, noisy space and a lot of people are talking, it’s harder to understand what a person is saying, not just because you can’t make out the words, but also because you feel like you’re kind of squinting your ears in order to understand the message. That feeling of squinting your ears and your brain is called listening effort.
And so we’re interested in ways to reduce listening effort for people. In 2018, I published a paper reporting this very cool finding that we could substantially reduce listening effort by presenting a modulating circle on the screen that changed in size and color as the speech got louder and quieter. So when someone is talking loudly, you see the shape get bigger.
So the way that we measure listening effort in the lab is to ask people to do some kind of secondary task while they’re listening to speech. So the task they were doing is they had to push a button as quickly as they could every time they heard a noun. The idea is, if you have to expend more effort listening to the speech, there are fewer resources available to push that button, and so it slows people down. So in 2018, we published this paper showing that presenting this circle makes people expend less listening effort.
And one of the things that was really remarkable about this study is that the effects were really big. In fact, every single participant who was in the study showed the effect. And it also made us really excited about potential clinical applications, right? This is something where, if you’re on a phone call and you can’t see the person’s face, having a little visual stimulus could actually make the task of listening easier. So we could make an app that lets you see the speech as well as hear it.
So we were really excited about this paper. It got published in a great journal, and we presented it at conferences. People were very excited about it. I, in part, based on that work, wrote and was awarded my first major grant, a grant from the National Institutes of Health. And so it was really– it was a very exciting finding.
So about a year later, we were trying to replicate and extend the effect. And so we were very surprised when we found that, under really similar circumstances, the finding did not hold up. In fact, the finding was reversed. It wasn’t just that the effect got a little bit smaller. We went from 100% of the participants showing the effect to most of them showing the opposite effect. And we were, of course, really puzzled and a little worried. But at that point, I thought there must just be some kind of bug with the new study because, in the old study, the results were so clear and so convincing.
So my research students and I went through and tried to think of every possible thing that could have caused this difference. Like, we’d updated the operating system of the computer. Could that have done it? We were using slightly different words. Could that have done it? But all of those things were really minor changes and shouldn’t have broken the results.
And then, one night, I was working at home. I had put my kids to bed, sat down at my laptop in my dark living room, and was just going over the program. And then I found it. I realized that in the original study I had made just a tiny little programming error, but it was entirely responsible for the effect that we saw.
So the nature of the error was that I had inadvertently set the timing clock to start measuring the participant’s response time before the words were presented in the condition that didn’t have the circle. It’s basically like, I started the stopwatch before the runners even got to the starting line. So what that means is that it looks like the condition with the circle was super fast, but it’s actually just because I was giving it a head start.
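The bug she describes, and why it was fixable, can be illustrated with a small Python sketch. This is a hypothetical reconstruction for illustration only, not her actual experiment code; the condition names, the 0.5-second stimulus delay, and the response-time distribution are all assumptions. Starting the clock before word onset in one condition inflates every recorded response time in that condition by the same constant, which is also why the correction at the journal was straightforward: subtract that constant back off.

```python
import random

STIMULUS_DELAY = 0.50  # assumed gap (seconds) between trial start and word onset


def true_reaction_time():
    """Simulate a participant's actual response time, in seconds."""
    return random.gauss(0.60, 0.05)


def measured_rt(condition, rt):
    """Return the response time a buggy program would record.

    In the 'circle' condition, the clock starts at word onset (correct).
    In the 'no_circle' condition, the clock starts at trial start (the bug),
    so every recorded time carries an extra STIMULUS_DELAY: the stopwatch
    starts before the runners reach the starting line.
    """
    if condition == "no_circle":
        return STIMULUS_DELAY + rt
    return rt


random.seed(0)
trials = [true_reaction_time() for _ in range(1000)]
circle = [measured_rt("circle", rt) for rt in trials]
no_circle = [measured_rt("no_circle", rt) for rt in trials]

mean = lambda xs: sum(xs) / len(xs)
print(f"circle:    {mean(circle):.3f} s")
print(f"no_circle: {mean(no_circle):.3f} s")  # looks slower by the head start

# Because the error is systematic (a constant offset on every response),
# the recorded data can be repaired by subtracting the offset back off.
corrected = [rt - STIMULUS_DELAY for rt in no_circle]
print(f"corrected: {mean(corrected):.3f} s")  # now matches the circle condition
```

Run as written, the no-circle condition appears about half a second slower even though the underlying response times are identical, and the corrected means agree, which mirrors how the published results could be recomputed rather than discarded.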
So when I realized what had happened– first, first, I saw that little thing, and I thought, oh, wait, is that what did it? No, no, don’t panic yet, Julia. That might not be it. And I tested a couple of other things, and then it was time to panic.
Yeah, I had made this mistake. So the bottom dropped out of my stomach. I started crying, and I just started realizing all of the consequences that this was going to have. Bringing this to light would mean telling my research students, telling my co-authors, telling the chair of my tenure committee and the dean of the college, because I was currently under review for tenure. My committee was meeting that month to decide whether I should have a job for the rest of my life or get fired.
And so all of this is spinning through my head. And I am, just to remind you of what’s happening in this situation, sitting in the dark in my living room alone, and nobody else knows. And so, to be totally honest, I also gave a couple of minutes of thought to, what if I didn’t tell? What if, to my research students, I just said, boy, this is weird. I guess that modulating circle isn’t very helpful. Let’s just study something else.
And not try to retract the original paper, not jeopardize tenure. I had also just gotten this grant. I had had my grant for like a month. And so I was like, I don’t know. Are they going to make me give the grant money back? I had no idea what was going to happen.
And so, yeah, I had some dark moments of, maybe I don’t tell. As you can see, that is not the choice that I made. That night, I drafted a list of everything I was going to have to do, everybody I was going to have to tell. And then I lay in bed, and didn’t sleep, and then got up in the morning and went about calling and emailing so many people to say, guess what? I screwed up in a major way.
And it was incredibly hard. Making all those phone calls and writing all those emails, each one was like this gut punch of, yeah, I screwed up. I made a mistake. Yep, I screwed up. I made a mistake, over and over again.
And what was really striking about that day was both that it was terrible and also that I was really surprised at how kind and supportive everyone I talked to was. My co-authors said, yeah, this is too bad. But at least we found it. At least we found it pretty soon after the paper was published.
The editor of the journal, the department chair, my dean, everyone basically said, this is really too bad, but you’re doing the right thing, and good for you for doing it. And because the mistake was systematic– it added the same amount of extra time to every single response– it was actually very easy to subtract that time off and arrive at what the answer should have been.
And so we were able to go to the journal and basically say, the paper that you have published in the journal is wrong. But we know what the right answer is. And so we went back and forth with the journal for months, trying to figure out what the right thing to do was. And I was very sure, when I initiated the process, that they would retract the paper because it was wrong.
It was so wrong that even the title was wrong. And after a lot of back and forth, the journal decided not to retract it but to put an error notice on the original paper and then publish a new version with the results updated. So in the end, I was awarded tenure. I told the National Institutes of Health, and they did not revoke the grant. In the end, all of the things that I had feared would happen didn’t end up happening.
But what I realized is that, in that moment where I was sitting in my living room, making this decision, I didn’t know what was going to happen because I had never heard a comparable story. You hear about people retracting papers because they engaged in scientific misconduct. I’ve heard about people catching problems in their own work before it’s published. And I’ve heard about people catching errors in other people’s work.
But I had never heard a story of someone finding their own mistake and coming forward with it. And so, because of that, I felt kind of compelled to share my story, so that somebody else, when they’re sitting in their living room, alone in the dark, would know that, at least in my case, it actually worked out much better than I would have expected.
If we want people to be willing to do what is right for the science, even when it goes against all of the personal incentives, it seems like we need to change scientific culture so that making mistakes is not seen as a mark of shame. We need a culture in which we understand that mistakes happen and we just need to find ways to reduce the likelihood that they happen, but that making a mistake isn’t a failure of someone as a person. Finding these errors and not talking about them, although it may be better in the short term for a given individual, is terrible for science.
And so we need to be willing to talk about our mistakes so that we can figure out how to make fewer of them but also so that we can better trust the scientific record. So it has been a hard process, but I wouldn’t have done it any differently.
JOHN DANKOSKY: Today, Julia Strand is an associate professor of psychology at Carleton College. This segment was produced by Elah Feder, and the music is composed by Daniel Peterschmidt.