When Eye-Grabbing Results Just Don’t Pan Out
You know the feeling — you see a headline in the paper or get an alert on your phone about a big scientific breakthrough that has the potential to really change things. But then, not much happens, or that news turns out to be much less significant than the headlines made it seem.
Journalists are partially to blame for this phenomenon. But another culprit is the scientific journals, and the researchers who try to make their own work seem more significant than the data really supports in order to get published.
Armin Alaedini, an assistant professor of medical sciences at Columbia University Medical Center in New York, recently co-authored a commentary on this topic published in The American Journal of Medicine. He joins Ira and Ivan Oransky — co-founder of Retraction Watch and Distinguished Writer In Residence at New York University, where he teaches medical journalism — to talk about the tangled world of scientific publishing and the factors that drive inflated claims in publications.
Armin Alaedini is an assistant professor of medical sciences at Columbia University Medical Center in New York, New York.
Ivan Oransky is co-founder of Retraction Watch, Editor in Chief of Spectrum, and a Distinguished Writer In Residence at New York University, where he teaches medical journalism.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. You know the feeling– you see the headline in the paper or get the alert on your phone about a big scientific breakthrough that has the potential to really change things, and then not much happens, or that news turns out to be much less significant than the headlines made it seem. Well, part of the blame lies with the journalists and headline writers, but part may also lie with the scientific journals and the researchers publishing in them trying to make their own work seem more significant than the data really supports.
Joining me now to talk about that are my two guests, Armin Alaedini, assistant professor of Medical Sciences at Columbia University Medical Center in New York and coauthor of a commentary on this topic, published in the American Journal of Medicine, and Ivan Oransky, cofounder of Retraction Watch. He’s also distinguished writer in residence at NYU, where he teaches Medical Journalism. Welcome, both of you, to Science Friday.
ARMIN ALAEDINI: Hi, Ira. Thanks for having me.
IVAN ORANSKY: Thanks, Ira. Good to be here.
IRA FLATOW: You’re welcome. All right, Armin, you recently wrote about what you see as this troublesome trend. Can you tell us why? Elucidate a little bit more.
ARMIN ALAEDINI: Well, as scientists, the primary method of communicating what we find in our research is really through publishing our works in scientific journals. Now, generally speaking, scientists are trained to approach their observations with a lot of skepticism. And we try to really avoid any unwarranted claims that may not be based on our data.
At the same time, when we communicate our findings in these papers as authors of the scientific articles, we naturally try to emphasize the significance and impact of the work that we’ve done. And we try to interpret them in certain ways to make the case most compelling. But there is a balance here. And, unfortunately, the balance between this compelling presentation on the one hand and the avoidance of hyperbole on the other, we think, might be shifting.
And this is something that I and my colleagues have been especially seeing in the form of these very provocative publication titles that seem to be written to basically imply major breakthroughs and transformative or paradigm-shifting findings. But when you look at the data, of course, and start digging in a little bit more carefully, often we see that those claims are not quite what they’re made out to be. And they’re not always supported by the data that’s in those papers.
IRA FLATOW: Can you give me an idea of the kind of overstatement so our listeners have an idea of what you mean? And it may not be an exact paper, but the kind of research that gets published.
ARMIN ALAEDINI: Yeah. So, for example, there may be a statement to the effect that “sex plays an important role in the persistence of symptoms in long COVID,” or that “patients with long COVID actually have the virus still in their bodies.” These are two examples that I specifically focused on in the commentary that I wrote because they were in actual articles that were published. And when other people looked at those data and analyzed them, it was clear that the claims did not quite match the data that was in those papers.
IRA FLATOW: Hmm. Now, Ivan, you’ve been watching the publishing field for a long time now. Is this a new phenomenon?
IVAN ORANSKY: Ira, there’s nothing new under the sun. And this is an example. Now, that doesn’t mean we shouldn’t be very concerned about this. So I share the concern here about what’s been going on.
But this is a longstanding problem. This certainly predates my examination of the scientific literature for issues and hype and bias. I can remember, when I was at Reuters Health– I left there 10 years ago now– some of the stories about a lot of hype when it came to chemotherapy. I actually wrote a few, but my staff wrote even more.
And so the kinds of things you would see maybe aren’t quite as stark as what we’re discussing today, but they really overemphasized benefit. They picked markers– in other words, they picked signs of progress that were maybe misleading or you might even say cherry picked. And they would omit side effects, or they would downplay side effects.
Those are all the kinds of hypes that you see that I think they’re endemic. And I think they’ve been endemic for a long time. And a lot of it, I think, has to do with the incentive structure that all scientists are working under. And I think that when we ignore the incentive structure, we’re going to make the same mistakes over and over.
Publish or perish is a real phenomenon. And in order to get published, you generally need to say something pretty important, pretty, quote unquote, “earth shattering” or “groundbreaking.” Those are all these terms people use. And, of course, Ira, I think they’re appropriate when you’re talking to a geologist or a seismologist but maybe not so much when you’re talking to someone who’s looking at cancer or something like that.
And it tends to happen in grant applications. It’s how people get funding. It certainly happens in publications.
And it does happen– let’s take some of the blame, if you will, Ira, amongst ourselves as journalists. When we try and get the attention of readers, listeners, viewers, the stories that have a false binary of “this is absolutely wonderful” tend to do much better than the ones that have a lot of nuance.
I did want to note one thing, if you’ll allow. I think it’s also important to be global about this issue, so, in other words, not just look at specific examples that one particular group may have issues with. They think that a particular paper was hyped, and so they might want to point that out. And I think that’s very important. But I think that it’s also important to keep in mind everyone’s own biases and to platform the fact that this is a global issue and, again, I would say an endemic issue.
IRA FLATOW: Armin, if Ivan has seen this going on for so many years that he’s been following it, what did you find troubling? Is there a new trend? Is it being done more often? Is it more blatant?
ARMIN ALAEDINI: I think I’m seeing more of it, especially during the pandemic, with this explosion of research articles, studies on COVID. There were a lot of claims being made in many places, including in research articles, that did not necessarily follow all of the data. So during the pandemic, I think I saw more of it.
But in general, in the past– I’ve been doing research for over 20 years– I see more and more of these hyped-up titles. And perhaps with the increase in how people get their news online, where we click on things– if you go on CNN, the most exciting titles get clicked the most– I think we’re seeing even more and more of these hyped titles.
IRA FLATOW: Is it the journals that are trying to attract this?
ARMIN ALAEDINI: I think the journals are not stopping it, but I think it’s primarily coming from the researchers themselves because, as Ivan pointed out as well, it’s the incentive structure. Basically, for scientists, publishing in the most prestigious journals is related to how they get promoted, how they get funded for their research. And, increasingly, they get more publicity in the media. So these are all incentives, and they’re all related.
IRA FLATOW: Yeah, so you have these journals publishing them. And then you have the click-driven media that picks up on this and amplifies it without digging any deeper, because that’s what they do.
IVAN ORANSKY: Well, Ira, if I may, there’s an interstitial there, which is press releases.
IRA FLATOW: Aha.
IVAN ORANSKY: And for years, actually, people have studied this, too. I want to be mindful– this is not my idea at all. People have looked at how, if an error or hype appears in a press release, it shows up in media coverage effortlessly and without any real questioning. And the studies I’m thinking of happened even before the explosion of social media, where things get even more soundbitey, if you will.
And so that’s actually quite depressing as a journalist. I think you’d agree, Ira, that things are just passed along. Nobody’s really doing any deep dives or even questioning in the moment of analysis. But with that sort of thing, I think we have to look at every stage.
I absolutely agree with Armin that it’s starting with the researchers. It’s in the journals. I would probably be less generous than Armin in terms of journals not stopping it. A lot of them are writing the press releases that are problematic. So are universities, because they all need to get what are known as impact factors. They need to be cited more often. These are terms of art.
For example, during the pandemic– to speak to Armin’s comments about what might have been happening over the past few years– two very major journals were actually in an arms race, which one of them won for the moment. One of them more than doubled its impact factor– which is a very flawed metric of how often papers are cited– because they were publishing so many of these really splashy COVID-19 papers, some of which, by the way, were retracted, which is where we started thinking about them and writing about them. But they were publishing all these papers that people had to cite because everyone was reading them and needed to be up on things, et cetera. And so every player in this– including, again, journalists and journals and universities and researchers, even funding agencies– everyone is playing this game where they need more attention. And one of the key ways to get more attention is to, frankly, overplay and hype your findings.
IRA FLATOW: Yeah, that’s interesting, Armin. You talked about the big-name journals being drawn to the splashy findings. Do you find that, then, on the other hand, the lesser-known journals are doing better at this?
ARMIN ALAEDINI: Well, some are. I think, generally speaking, the society journals that have a more narrow focus publish less of those articles. But this is becoming a problem that’s affecting the entire business, especially with this rise of open-access publishing. We are seeing more and more of that. But, in general, I would say we see less of it with the lower-tier, more focused, specialized journals.
IRA FLATOW: And also, let me talk about specialized journalists, Ivan, for a second, because you teach medical journalism. There are not as many medical journalists around on the major media platforms. They don’t hire specialized medical science reporters, do they?
IVAN ORANSKY: Yeah, I think, again, there’s always nuance. But I think, generally speaking, you’re correct. And it actually parallels what Armin was just saying, what you were asking, Ira, about the specialized journals, right? There’s actually been what I consider a fairly significant growth in specialized news outlets, whether they’re trade publications– in other words, for professionals in the space, which I’ve worked in a lot of those for a number of years– or for a public that is more interested in science maybe than average, for lack of a better term. I think there’s been a fair amount of growth there, although some contraction as time goes on.
But, to be fair, that’s where a lot of my students, for example, and students at other programs who are very specialized– I’m obviously quite biased here, and I think they’re very talented and well-trained– that’s where a lot of them gravitate, because they want to have more of an impact, they feel, and to have richer discussions that can include nuance. Armin was mentioning CNN earlier, et cetera– at large news outlets, you can’t necessarily have those kinds of nuanced discussions because you have to compete with whatever the political or other big stories are that day. I do think that it’s something that has been tracked pretty well. And the large publications have just not kept up in terms of hiring those folks.
IRA FLATOW: This is Science Friday from WNYC Studios. Can you offer a solution to this, Armin?
ARMIN ALAEDINI: Well, I think, as Ivan alluded to, we have to deal with the incentive structure. And we can think of some short-term and long-term approaches. In the short term, universities really need to be rethinking how they reward their scientists. Sure, they want their scientists to publish in the best journals and to bring in as much money as possible to the university to do their research. But rewarding publication in those prestigious journals may not be the best policy for universities or funding agencies to advance good science.
The other thing is, of course, this peer review of manuscripts. Everything that we publish is supposedly peer reviewed, right? So we have other experts, our peers, reviewing this and determining whether this should be published in its current format or not be published at all, et cetera.
That peer review process needs to be changed and needs to be done more diligently. Reviewers who are involved need to have the proper expertise to review these papers. The editor who oversees this also needs to be familiar with the field.
And also, we talked about journalism and having expert journalists. In the long term, though, I think the solution really has to come from a change in how we teach our students, the culture of how we emphasize certain things in the way we teach PhD students and MD students. Scientific rigor is extremely important, how data should be analyzed and how data should be interpreted. I can tell you that during my PhD, I was not taught that. And I learned it along the way.
But I think we can change the way that we teach research to students. And we need to prioritize these certain values of how we analyze data and how we communicate them. That should become part of the graduate program, in my opinion.
IRA FLATOW: Ivan, solutions? Any thoughts on that?
IVAN ORANSKY: No, I essentially agree with Armin in terms of where the various areas for improvement are. I would push on the peer review process. And not to disagree with anything that Armin has said about rigor and improving it, but I would even take a step back and say, let’s be more honest about what peer review really can and can’t do.
I think we have been, if you’ll forgive me, sold a bill of goods by people who have a vested interest in having us believe that peer review is a Good Housekeeping seal of approval. There’s a false binary of “it’s peer reviewed” or “it’s not peer reviewed.” That has actually been upended recently– in what I would say is a good way– by what are known as preprints: papers, manuscripts that are posted online that are not peer reviewed but that clearly say they’re not peer reviewed. And if that sort of nuance and that sort of context is provided to other scientists, of course, as well as to readers and listeners and viewers, I think we’d have a much better understanding of how science really works, of how peer review works and doesn’t work.
During the pandemic, journals were so desperate to publish papers about COVID-19 because they wanted to get cited more often. They actually asked me to peer review five different papers about COVID-19. Now, you may think I know something about retractions, maybe about scientific publishing, a smattering of other things. I really don’t know anything about COVID-19, other than trying to keep up, as others have, with the literature. And yet that’s what happened.
And it also reveals a real problem, which is that there just aren’t enough qualified peer reviewers to do the kind of job we’ve been expected to think that peer review does. And I think being honest about that is a really good start to having more trust in the process rather than in any particular result, which is what we’ve, unfortunately, gotten to as a result of everything we’ve talked about today.
IRA FLATOW: Wow. Really interesting point. We could start another whole program on that. We’ll have to save that for another time, Ivan, and have you and Armin come back to talk about it, because we have run out of time.
Thanks to both of my guests, Ivan Oransky, cofounder of Retraction Watch, Armin Alaedini, assistant professor of Medical Sciences at Columbia University Medical Center in New York. Thank you, both of you, for being with us today.
IVAN ORANSKY: Thank you, Ira.
ARMIN ALAEDINI: Thank you, Ira.