Bringing Rigor Back To Health Research
What’s holding up the quest to cure health scourges like cancer and diabetes? Some biomedical researchers will say it’s their own research practices.
Last week, the National Academies of Sciences, Engineering, and Medicine published an update to a 25-year-old report on research integrity. While in 1992 the report addressed “bad apple” researchers, the 2017 edition calls out a rise in systemic, detrimental practices that can invalidate results, including poor study design, cutting corners, and lack of data- and methodology-sharing, which inhibits other researchers from attempting to replicate the research.
Meanwhile, the Trump administration has proposed cutting $5.8 billion from the budget for the National Institutes of Health.
Richard Harris is an NPR science journalist and author of Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions (Basic Books, 2017). He explains how competition for funding and a pressure to publish breakthrough research at a rapid pace are undermining advances in treating cancer, ALS, and other diseases. He also highlights opportunities for a system overhaul.
Richard Harris is the author of Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions (Basic Books, 2017). He’s a science reporter at NPR in Washington, D.C..
IRA FLATOW: This is Science Friday. I’m Ira Flatow.
As the Trump administration proposes cutting as much as $5.8 billion from the NIH, and as scientists march on Washington to defend their field, you might think it’s the worst time for a big airing of systemic deficiencies in biomedical research. Well, it may be a bad time to talk about that, but NPR science correspondent Richard Harris is airing them anyway. He has a terrific new book out, full of stories of labs cutting corners, clinical trials based on bad preclinical research, contaminated cell lines, and a consistent failure to find new advances in curing cancer and other major diseases. Papers in recent years have estimated, for example, that as many as 20% of studies may have untrustworthy designs, wasting as much as $28 billion per year.
So what should researchers and their institutions be doing differently? Can we criticize science as it’s practiced, and still advocate for funding it better? Richard Harris is an NPR science correspondent and author of the new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. We also have an excerpt of the book at sciencefriday.com/cures. Our number: 844-724-8255.
Always good to see you, Richard.
RICHARD HARRIS: Nice to see you, Ira.
IRA FLATOW: We’ve known each other like 35, 40 years, something like–
RICHARD HARRIS: It seems forever, yeah.
IRA FLATOW: Well, guys, this is a great treatise. This is a great book. I mean, it’s eye-opening. And you don’t mince any words in this book about criticizing scientific research. Was it surprising to you, after all the years you’ve covered science?
RICHARD HARRIS: It was somewhat eye opening, absolutely. And I must say that it does have a very gloomy title, and you do raise some very fair questions about whether the timing of this is ideal. But it’s not an all bad news book, actually. There’s a lot of things that I point out in the book that can be remedied and should be remedied. I think, you know, we fund a lot of research, we taxpayers. And it’s a completely valid question to say, are we getting our money’s worth? And that was– I went into it with an open mind. I said, sometimes we are. Sometimes we aren’t. Here’s how we can do better.
IRA FLATOW: You talk about, I think, what we as science reporters, we as medical reporters do every week when we open up the journals. We find a mouse study. We talk about the mouse study. We’re always careful to say, it’s a mouse study; it may not apply to humans. But that doesn’t really let people know where that mouse study is eventually headed.
RICHARD HARRIS: That’s true. And a lot of these mouse studies are headed absolutely nowhere. One of the stories that I tell in the book is about the search for a treatment for glioblastoma, a terrible brain cancer. And I was in somebody’s office, Anna Barker, who’s at Arizona State University and who used to be Deputy Director of the National Cancer Institute. And she had a poster on the wall, and I said, the print is so fine on that poster. What is that? And she said, it’s a list of every single study of glioblastoma. There are 200 study names on that list, and every single one of those studies failed. And many of the studies started with hopeful, intriguing ideas from mice that simply didn’t translate into human health.
IRA FLATOW: But we won’t hear about those failures, will we?
RICHARD HARRIS: Well, no. By and large, we may report that first initial, oh, here’s something that seems to show promise in mice, and then it sort of fades away from there.
IRA FLATOW: Are we talking about misconduct by the researchers in many of these cases?
RICHARD HARRIS: Very rarely, as far as anyone can tell, is there outright fraud and misconduct going on here. What’s much more common is that in the system that is set up right now, there’s such a fierce race for money, and the amount of money that’s available is diminishing all the time. It puts incredible pressure on scientists, and they can’t afford to have anything less than perfect, or they’re worried they’ll lose their funding. So what happens is, this drives scientists to exaggerate what they’re finding, to overstate, to sweeten, to maybe cut a few corners here, to try to get their science out the door as fast as possible, because they need that next grant. They need promotions. Young scientists wanting to get jobs know that they have to have something really spectacular or they can’t move ahead.
So the incentives are really set up wrong. Nobody wants to spend time on studies that don’t pan out, but the incentives are all pointing in the wrong direction.
IRA FLATOW: And you know, you read again, reading the journals and reading articles about research, I mean the whole definition of research is search and then research, right? So that you do those experiments over again. But you point out in your book that many of these original experiments are never done again.
RICHARD HARRIS: Right, or people do them again and it doesn’t work, and they give up. And they never publish the result. You know, often a graduate student will be told, hey, here’s a hot paper. Why don’t you build on that finding? But of course, before you build on it, repeat the experiment. And they try, and try, and try to repeat the experiment, and they never can get it to work. And so what do they do? They can say, well, I could waste more time writing up how hard I tried to repeat this experiment and couldn’t make it happen, or they can say, boss, give me a new project. In other words, we never see, or we don’t often enough see, those papers that say, hey, there’s a problem here. We didn’t resolve it. Maybe somebody else can, but this does not seem to be working.
IRA FLATOW: You mentioned in your book that you were surprised yourself when you started researching and interviewing scientists. You thought you’d get pushback from them, but they didn’t. They agreed with you.
RICHARD HARRIS: I was surprised at the extent to which people embraced this as a serious problem and were happy to talk to me. And I’m not just talking about the low-level people in the labs who are grousing about how bad their bosses are. I talked to the head of the NIH. I talked to the–
IRA FLATOW: And he agreed there was a problem, right?
RICHARD HARRIS: Absolutely. And the top person at the FDA who’s in charge of drug approval had some of the spiciest language to offer describing what’s going on in academic research. And Nobel laureates, and many members of the National Academy of Sciences, and so on. I mean, I talked to the top people in this field, and there was a lot of hand-wringing, a lot of concern, and a lot of genuine desire to figure out what was wrong so they could make it better, because, as I say, nobody wants the state of affairs to be this way.
IRA FLATOW: It’s interesting, because you made a distinction when you talked about academic research and corporate research. And you were saying, you know, corporate research seems in many cases to be better, because they’re not going to throw money away; they are a corporation, and there’s a bottom line to what they’re doing.
RICHARD HARRIS: Right. There are two different incentives in corporate research, one of which is, absolutely, if it doesn’t work, you’re not going to make money off of it. So there’s an incentive to cut your losses and move on. The other thing is, if you’re an academic researcher and you have an idea that you’re pursuing, and it turns out it’s wrong, it may be very difficult to get money to pursue a new idea. So you’re maybe going to stick with an idea that you, in your heart of hearts, know is not really going anywhere. In the corporate world, they say, you know, you’re a good scientist. We have a different project for you. Take this one on instead. So I think both of those dynamics are at play.
IRA FLATOW: Our number 844-724-8255. Talking with Richard Harris about the very aptly named book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. That’s out from Basic Books. And you know, it looks like the National Academies of Sciences agree with you. They have a new report out, just last week, describing what they call detrimental research practices and advocating higher standards. Most scientists in biomedicine obviously acknowledge this problem, as you say. So what is the overarching answer to this? There’s no thumbnail answer to this.
RICHARD HARRIS: There’s no thumbnail answer, but there are a couple of things that can happen, and some can happen quite quickly and quite easily. One thing is that scientists often use poor ingredients, and they can do a lot better job checking their ingredients. In fact, there are new rules requiring that they do a better job of checking to make sure that their cell lines, for example, are what they think they are.
IRA FLATOW: That is the amazing part of the book. In that part of the book, you say, you know, they’ve been experimenting with what they thought were one kind of cancer cell, and it turns out it’s another kind of cancer.
RICHARD HARRIS: Yeah.
IRA FLATOW: I couldn’t believe that.
RICHARD HARRIS: There are hundreds and hundreds of cell lines that are misidentified. It is quite remarkable. So that’s one thing. You can fix that problem easily. Another thing you can do is you can increase the amount of transparency that’s in science. And you can tell scientists, make all of your data available. Make your analytical code, your computer code available so other people can run the code on your data and see if they can get the same answer, and share your ingredients.
IRA FLATOW: They will do that?
RICHARD HARRIS: Obviously, there are some disincentives for doing that, because people are worried about the competitive nature of it. But that is often required in various guises, either by journals or by funding agencies. Some scientists ignore that anyway. But the more you can make that happen, the quicker you can close the loop and say, is this a valid result or not? The other actually powerful thing about that is, if a scientist says, I know I’m going to have to put all this out there, I’m going to check it one more time, because I’d rather find my own mistakes than have somebody else find them. So it’s not universally done by any means, but I think the more we can improve transparency in science, the better off we’ll be.
IRA FLATOW: You know scientists always are trying to get into the two or three top journals, Science, Nature, whatever. What is the role of the journals in ensuring that this is a good bit of research here?
RICHARD HARRIS: Well, actually, they incentivize bad research in a way, because if you know that your promotion is based on getting your paper into Cell, Nature, or Science, you have to make sure it is as squeaky clean as possible. So if you find some data that doesn’t agree: ah, maybe I’ll leave that out of my paper, because that may make the editor less likely to accept it. And so this actually turns out to be a pretty serious disincentive for scientists to be totally honest about things. And there are words that have increased dramatically in the scientific literature in the last 30 years or so, words like “extraordinary” and “unprecedented.” Those are increasing in dramatic fashion, because people want to say, what I have found is so spectacular that I deserve a new grant, and maybe a promotion, or both. So–
IRA FLATOW: Or a prize.
RICHARD HARRIS: Or a prize. So it actually leaves some journals uncomfortable, because if you’re a dean and you’re trying to decide who to hire, you’ll say, well, who’s published in the top-tier journals? And the journals say, we just pick things that we think are interesting. We don’t really mean to be the surrogates for the deans at these schools to decide who’s the top scientist. Just because you have a paper in a top journal doesn’t necessarily mean it’s a top result, but you know, it’s part of the topsy-turvy system that has evolved.
IRA FLATOW: Speaking of topsy turvy, let’s go to our listeners. Let’s take a call. Let’s go to Jennifer, in Houston. Hi, Jennifer!
JENNIFER: Hi there, thanks. I was wondering if anybody that you’ve been talking to during this project mentioned the fact that sometimes the result you publish is not the initial result that you were looking for at the time of the grant proposal, that sort of thing. Could it be that some of these poorly designed experiments look poorly designed, ex post facto, because they were not in fact the initial results of the research program? They’re in a sense leftover results.
RICHARD HARRIS: That’s an excellent question. And one thing that does happen a lot is people run an experiment, they don’t get the result that they expected, and then they say, well, I’m going to come up with a new hypothesis. This is called HARKing, or Hypothesizing After the Results are Known. And this is a big no-no. It seems commonsensical: well, if I found an exciting, statistically significant result, what the heck, I’m going to go with it. But it turns out those are highly likely to be false leads. Scientists are welcome to make those discoveries and observations, but they shouldn’t publish them as discoveries. They should say, here’s a hypothesis-generating observation; now I’m going to go do a study that’s designed to actually look for this effect, and see if I can find it in an experiment that is actually designed to look for it. It’s a very common and troubling cause of false leads in the scientific literature.
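The HARKing trap Harris describes can be illustrated with a small simulation (a hypothetical sketch, not from the book): if a study measures enough unrelated variables, sifting through them after the fact will routinely turn up a “significant” correlation that is pure noise.

```python
import math
import random

def corr_pvalue(x, y):
    """Approximate two-sided p-value for a Pearson correlation,
    using the Fisher z transformation and a normal approximation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    z = math.atanh(r) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
runs, n, k = 300, 40, 20   # 300 simulated studies, 40 subjects, 20 noise variables
lucky_runs = 0
for _ in range(runs):
    outcome = [random.gauss(0, 1) for _ in range(n)]
    # every candidate "predictor" is pure noise, unrelated to the outcome
    noise = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]
    # HARKing: scan all 20 variables after the fact and "discover" any p < 0.05
    if any(corr_pvalue(v, outcome) < 0.05 for v in noise):
        lucky_runs += 1

print(lucky_runs / runs)  # well over half the studies find a spurious "effect"
```

With 20 null variables each tested at p < 0.05, roughly 1 − 0.95^20 ≈ 64% of studies will stumble on at least one spurious hit, which is exactly why a pattern spotted this way needs a fresh, pre-specified experiment before it counts as a discovery.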
IRA FLATOW: I’m Ira Flatow. This is Science Friday from PRI, Public Radio International, talking with NPR science correspondent Richard Harris about his new book Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions, and it’s a fantastic book. It talks about things, I’ll tell you, you would not have thought of, like how bad at statistics scientists can be. And they’re all trying to get to that 95% confidence level.
RICHARD HARRIS: Right. Yeah. That p-value of 0.5– 0.0– yeah, 0.05. And a lot of people think that means their result has a 95% chance of being correct, and it’s completely not that. It’s a little hard to understand. And I mean, I’m a non-scientist; I never really took enough statistics to do this, but I reviewed this chapter with several really capable biostatisticians to make sure I was on the right track. But it is easy for scientists to fool themselves into thinking, this is a significant result, this is past this magic threshold, therefore I can publish, and it must be real. And very, very often, even something that reaches that 95% threshold is just wrong. Statistically speaking, it’s a fluke.
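Harris’s point about the 0.05 threshold can be checked with a quick simulation (a hypothetical sketch, not from the book): run many experiments where the null hypothesis is true by construction, and about 5% of them will still cross the “significance” line by chance alone.

```python
import math
import random
import statistics

def two_sample_p(a, b):
    """Approximate two-sided p-value for a two-sample test of equal means,
    using a normal approximation for simplicity."""
    n = len(a)
    mean_diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = mean_diff / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
trials = 2000
false_positives = 0
for _ in range(trials):
    # both groups drawn from the SAME distribution: the null is true
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if two_sample_p(a, b) < 0.05:
        false_positives += 1

print(false_positives / trials)  # roughly 0.05: about 5% "significant" flukes
```

The 0.05 threshold caps the false-positive rate per test at about 5%; it says nothing about the probability that any particular “significant” result is real, which also depends on how plausible the hypothesis was in the first place.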
IRA FLATOW: What was your turning point? I mean what got you on to writing this book? Was there something, one particular case you were following? And said, oh, my goodness, if this is bad then who knows what the rest of research is like?
RICHARD HARRIS: There were a number of instances like that. I think the first thing was, I was looking at the NIH budget. I came back to reporting on these issues in 2014, and I thought, what’s been going on? And I discovered the NIH budget had doubled between 1998 and 2003. And then it had flattened out, and in real dollars it had actually decreased by 20%. And I thought– and I also discovered that there was a 50% increase in the amount of laboratory space in this country, to do this kind of research. So I figured, wow, this has got to have some sort of an effect, to double the budget, bring in all these new people, open all these new labs, and then gradually crank down the amount of money that was available. I figured this is going to have some negative consequences. So that was one really big thing. The other paper that really caught my attention was a paper published in 2012 by a researcher named Glenn Begley, who was head of cancer research at the drug company Amgen. And he tried to replicate 53 exciting studies that had been done in academia, and he thought, wow, if any of these work out, these are really great leads for drugs. And of those 53, he could only replicate six, even with help from many of the original researchers. So that was–
IRA FLATOW: That flag went right up.
RICHARD HARRIS: That flag was– I was not the only one who noticed that flag, but I think, I thought, wow, there’s something here that’s worth digging into.
IRA FLATOW: You talked about the budget cuts, or the decrease in funding for NIH. Now are we going to see even more budget cuts from this new administration, and a further degradation of research?
RICHARD HARRIS: It’s hard to know what’s going to happen right now in Washington. But clearly, it’s the worst thing you can do if you’re in a situation where people are starved for money, and the driving force of what’s going on here is that people are fighting harder and harder for less and less money. The solution is not to say, OK, we’ll put less money in the pot. I mean, that’s only going to make the problem worse.
IRA FLATOW: But you said at the beginning, that you don’t want everybody to come away from the book thinking the sky is falling. That there are fixes for these things.
RICHARD HARRIS: Absolutely. And one thing I did spend a lot of time doing while researching the book was tracking down people who’ve been thinking about these problems and thinking about ways to solve them. And I encountered many people with great ideas. I already mentioned the idea of transparency, but there are a bunch of other ideas that people have for rethinking the culture of science, for just improving the general situation, for getting people aware of this problem, that’s a big factor, and also for improving education, because really it starts with the fact that scientists don’t get the kind of training they need. They get on-the-job training; they learn the technical skills, but they don’t necessarily learn how to think more deeply about how to design experiments, how to run the right statistics, or how to find a biostatistician to help them, those sorts of things.
IRA FLATOW: Well, we’re all glad you went back to medical reporting in 2014, because this is a great book.
RICHARD HARRIS: Well, thank you.
IRA FLATOW: I mean, I’ve read a lot of what you’ve done, and this, I think, is the best stuff so far. Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions is an eye-opening book by Richard Harris. Thank you, Richard.
RICHARD HARRIS: My pleasure, Ira.
IRA FLATOW: You’re welcome. That’s about all the time we have. B.J. Leiderman composed our theme music. Thanks to our production partners at the studios of the City University of New York, and the folks here at NPR for helping us get our show on the air this week. If you missed any part of the program and you’d like to hear it again, you can subscribe to our podcast, because Science Friday is every day these days: we’re on our website at sciencefriday.com, on smartphones and tablets. You can also join our mailing list; we’re on Facebook, and we’re tweeting all week. Hope to see you at the March for Science tomorrow. I’m Ira Flatow, in Washington.