02/10/2017

Should A.I. Have a Role in Science Publishing?

4:30 minutes

The myth of scientific research is that it involves an exacting, methodical approach to uncovering the truth. But in reality, science is a lot messier and fraught with human error—and that messiness extends to scientific publishing.

When researchers submit their work to academic journals, it goes through a peer-review process, which is supposed to enhance and strengthen the work through critical feedback from experts in the field. But that process can be easily compromised by overworked researchers asked to volunteer their time, conflicts of interest from competing labs, and even fraudulent data that human reviewers might miss.

But what if your reviewer were an intelligent machine? For several years now, the world of academic publishing has been flirting with the idea of using artificial intelligence systems as part of the peer-review process. Proponents argue that A.I. could reduce reviewer bias and catch fraudulent data that human reviewers miss. But others argue that allowing an A.I. system to be the gatekeeper of new scientific knowledge is a dangerous step to take. Adam Marcus, co-founder of the blog Retraction Watch, joins us to discuss the role that A.I. could have in scientific publishing.

Segment Guests

Adam Marcus

Adam Marcus is the co-founder of Retraction Watch, and Managing Editor of Gastroenterology and Endoscopy News. He’s based in New York, New York.

Segment Transcript

IRA FLATOW: Now it’s time to play Good Thing, Bad Thing. Because every story has a flip-side.

Now we often discuss on this program how automation and robots are taking over our jobs. It’s a growing concern in many sectors, including science publishing. Publishers are increasingly experimenting with artificial intelligence to review scientific papers. But how much do we want algorithms, instead of people, evaluating our research and determining what gets published and what doesn’t? With me to discuss the good and bad of A.I.-assisted peer review is Adam Marcus, co-founder of the blog Retraction Watch and managing editor of Gastroenterology and Endoscopy News. Welcome to Science Friday.

ADAM MARCUS: Hi, Ira, thanks so much for having me.

IRA FLATOW: You’re welcome. Let’s talk about peer review. Why is peer review so important?

ADAM MARCUS: Two words, Ira: fake news. Peer review is science’s version of a filter for fake news. It’s the way journals try to weed out studies that might not be methodologically sound, or that might have results that could be explained by hypotheses other than the ones the researchers advanced. It’s the system they have for screening manuscripts before they get published.

IRA FLATOW: Now there has been a lot written about whether artificial intelligence will play a big role in this process. What could an algorithm do better than maybe a person can?

ADAM MARCUS: Well, the way we see it, algorithms can do nothing necessarily better, except that they can do it faster and in bulk. So I can give you an example. There’s a system called statcheck, developed by researchers in the Netherlands, that can rapidly detect potential errors in statistical values. It can do, according to the researchers, in a nanosecond what a person might take 10 minutes to do. So obviously that could be very important for analyzing vast numbers of papers.

But the problem is that it catches signals, and there’s a lot of noise in those signals. So what you really need is the human component to make sure that what is being detected on a first pass is actually a real red flag.
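To make that concrete, here is a minimal Python sketch of the kind of consistency check statcheck automates: recompute the p-value implied by a reported t-statistic and its degrees of freedom, and flag reports where the stated p-value disagrees. The regex, tolerance, and helper name check_t_tests are illustrative assumptions for this sketch; the real statcheck is an R package with a far more thorough implementation.

```python
# Illustrative sketch of a statcheck-style consistency check (not statcheck itself):
# recompute the p-value implied by a reported t statistic and flag mismatches.
import re
from scipy import stats

# Hypothetical pattern for reports such as "t(28) = 2.20, p = .036".
REPORT = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*),\s*p\s*=\s*(\d*\.\d+)")

def check_t_tests(text, tol=0.01):
    flags = []
    for df, t_val, p_reported in REPORT.findall(text):
        # Two-tailed p-value implied by the reported t statistic.
        p_computed = 2 * stats.t.sf(abs(float(t_val)), int(df))
        if abs(p_computed - float(p_reported)) > tol:
            flags.append((df, t_val, p_reported, round(p_computed, 4)))
    return flags

# The reported p (.001) is inconsistent with t(28) = 2.20 (about .036), so the
# line gets flagged; a human reviewer then decides whether it is a typo or worse.
print(check_t_tests("We found a difference, t(28) = 2.20, p = .001."))
```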

IRA FLATOW: So could there possibly be a hybrid of AI and people together?

ADAM MARCUS: Absolutely. In fact, I think that’s how the system probably should work going forward. Here’s another example. Many publishers, and in fact every reputable publisher should be doing this right now, use plagiarism detection software to analyze manuscripts that get submitted. At their most effective, these tools identify passages in papers that are similar to previously published passages. But it should be up to a human editor to figure out whether that’s in fact plagiarism or whether there’s a good reason for the overlap.
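As an illustration of the kind of overlap detection Marcus describes, here is a hedged Python sketch based on word n-gram "shingles"; it is a generic technique, not any particular vendor's algorithm, and the function names and five-word shingle size are assumptions made for this example. A high score is only a signal for a human editor to investigate.

```python
# Generic text-overlap sketch: break documents into overlapping word n-grams
# ("shingles") and measure what fraction of a submission's shingles already
# appear in a previously published text.
def shingles(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, published, n=5):
    a, b = shingles(submission, n), shingles(published, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a)  # fraction of the submission's shingles seen before

# Scores near 1.0 warrant a closer human look; legitimate reuse (quotes,
# standard methods text) can also score high, which is why a person decides.
print(overlap_score("the quick brown fox jumps over the lazy dog today",
                    "yesterday the quick brown fox jumps over the lazy dog"))
```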

IRA FLATOW: But if you use an algorithm is there not the possibility, and now this is sort of true of everything that has to do with computers these days, of the program being hacked?

ADAM MARCUS: Well, I’m not an expert in hacking. I’m sure that anything’s possible. But it’s also true that there are ways to hack the programs that we have now. You can be pretty sophisticated at writing papers so that you take a little bit from many papers and a plagiarism detection scan won’t detect plagiarism.

IRA FLATOW: So what’s the best practice? Where do you think this will shake out? What will happen here, do you think?

ADAM MARCUS: I think I’d like to see every manuscript that gets submitted run through a plagiarism detection system and a robust image detection system, in other words, something that looks for duplicated or fabricated images. And these things are in the works. In fact, there is a researcher at Stanford, Elisabeth Bik, who did a study recently, and this was H.I., not A.I., just human intelligence. She analyzed about 20,000 papers and found that about one in 25 had improperly duplicated images.

And then we have some sort of statcheck-like program that looks for squishy data.
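One generic way such image screening can work is sketched below with a perceptual "difference hash": fingerprint each figure and flag pairs whose fingerprints nearly match, even after resizing or re-compression. This is an illustrative technique, not the method used in Bik’s study or by any publisher, and the file names in the usage comment are hypothetical.

```python
# Generic image-duplication sketch using a small perceptual "difference hash".
from PIL import Image

def dhash(path, size=8):
    # Shrink to (size+1) x size grayscale and record whether each pixel is
    # brighter than its right-hand neighbour; the bits form the fingerprint.
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = [
        px[row * (size + 1) + col] > px[row * (size + 1) + col + 1]
        for row in range(size) for col in range(size)
    ]
    return sum(bit << i for i, bit in enumerate(bits))

def hamming(h1, h2):
    # Number of differing bits between two fingerprints.
    return bin(h1 ^ h2).count("1")

# Usage (hypothetical file names): small Hamming distances, say 5 or less,
# suggest the two panels may be the same image and deserve a human look.
# print(hamming(dhash("figure_2a.png"), dhash("figure_5c.png")))
```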

IRA FLATOW: OK. So you think all of that will be combined and we’ll automate this a little bit more, but still have the human element involved. Adam Marcus, thank you very much for taking the time to be with us today.

ADAM MARCUS: My pleasure.

IRA FLATOW: Adam Marcus, co-founder of the blog Retraction Watch, managing editor of Gastroenterology and Endoscopy News.

Copyright © 2017 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producer

About Katie Feather

Katie Feather is a former SciFri producer and the proud mother of two cats, Charleigh and Sadie.
