An AI Partnership May Improve Breast Cancer Screenings


a doctor in a white coat with a stethoscope around their neck, pictured from the neck down, holds a large black X-ray sheet depicting the results of a mammogram
Credit: Shutterstock

Reading a mammogram is a specialized skill, and one that takes a lot of training. Even expertly trained radiologists may miss up to 20% of breast cancers present in mammograms, especially if a patient is younger or has larger, denser breasts.

Researchers have been working since the advent of artificial intelligence to find ways to assist radiologists in making more accurate diagnoses. This July, a German research team, publishing in The Lancet Digital Health, found that when AI is used to help sort mammograms into low-, uncertain-, and high-risk categories, a partnership between the radiologist and the algorithm leads to more accurate results.

To explain how this result may be translated into real clinical settings, Ira talks to Harvard’s Constance Lehman, a longtime researcher in the field of breast imaging. She talks about the promise of AI in breast cancer screening, its limitations, and the work ahead to ensure it actually serves patients.

Further Reading

  • Read the study, via The Lancet Digital Health.



Segment Guests

Constance Lehman

Dr. Constance Lehman is a professor of Radiology at Harvard Medical School, and a breast imaging specialist at Mass General Brigham in Boston, Massachusetts.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow. Accurately reading a mammogram is one of the more difficult tasks in radiology. And even a good radiologist can risk missing a patient’s cancer. In fact, up to 20% of exams might turn up false negatives depending on your age.

And as long as we’ve had artificial intelligence, researchers have been trying to bring it into health care, including identifying lesions in mammograms. A study published in The Lancet Digital Health in July tried another approach. What happens when a highly trained algorithm works with radiologists to identify which patients’ mammograms need a closer look? The team approach resulted in a 2.6% improvement over both radiologists working alone and the AI working alone.

Here to talk more about the possibilities and limitations is Dr. Connie Lehman. She’s a professor of radiology at Harvard Med School, a breast imaging specialist at Mass General Brigham, and a researcher who has been working on the integration between AI and mammography for many years. She was not involved in this new research. Welcome to the program, Constance.

CONNIE LEHMAN: Thanks for having me. Delighted to be here.

IRA FLATOW: I’m so eager to talk about this because it affects so many women, doesn’t it?

CONNIE LEHMAN: You know, it really does. While it’s imperfect, mammography is the best method we have to detect breast cancer early, when it can be cured. So we want to address the challenges and the problems with mammography. And AI holds great promise in that domain.

So we are very fortunate in breast imaging to have decades of research, believe it or not, in AI. People think it’s this very new thing, but computer-assisted detection and diagnosis tools were developed decades ago. And we had a lot of research in the 1990s and early 2000s to demonstrate what we identified and what we found when we had computers helping humans read mammograms. So we learned a lot. Now, we have advanced tools in artificial intelligence, and in particular, much faster computers where we can do deep learning and neural networks, which really amplifies the potential impact of AI tools.

So a lot of the past work was having a computer flag areas on a mammogram that a radiologist should pay extra attention to. So it was really focused on lesions on the woman’s mammogram. And what we found out was there was human variation in interpreting mammograms when you didn’t have a computer helping you, and there was human variation in how radiologists interpreted mammograms when they did have a computer helping them. So it wasn’t as simple as the computer flagging a lesion that one radiologist might have missed, and the radiologist then diagnosing the cancer. It was more complex than that.

IRA FLATOW: So what did you see in this study that might change the conversation?

CONNIE LEHMAN: So this study said, let’s look at it a little bit differently. Rather than mark specific areas on the mammogram, let’s instead sort the mammograms. Using the computer, let’s sort the mammograms that need more attention and those mammograms that are highly likely to be negative, to not have a cancer.

So this is a domain that a lot of us are working on called triage. So you’re not going to mark the mammogram. You’re going to triage the case, saying, this needs some extra attention by the radiologists. We’re worried that there’s a cancer on this mammogram. And other mammograms are like, this looks totally clean. We think the likelihood there’s a cancer on this mammogram is next to zero.

So in this study, what they wanted to do was to say, well, we know that radiologists vary in how they listen to the computer that’s marking different lesions or different areas on the mammograms. How did the radiologists respond to these cues from the computer? This is such an exciting domain because it’s a different paradigm for how to use computers to help us do better at finding cancers on a mammogram.

So what did this group do? They decided to compare a single radiologist’s reading to a simulated situation where the radiologist would benefit from the AI score on the mammogram. So the first thing we need to know is the study design. This is a retrospective study.

So we’re going back in time. We’re pulling mammograms. We know how the radiologist interpreted the mammogram back in the past. We get AI scores from those mammograms, and we simulate a world where the radiologist would have used that AI score and adjusted their interpretation from that old recorded interpretation we had from the past. So it’s retrospective and a simulation.
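The decision rule Dr. Lehman describes can be summed up in a few lines of code. This is a hypothetical sketch for illustration only: the 0-to-1 "AI suspicion score" and the two thresholds are assumptions, not values from the study.

```python
# Illustrative sketch of the simulated triage rule described above.
# The score scale and thresholds are hypothetical, not from the study.

LOW_THRESHOLD = 0.02   # below this, the exam is treated as highly likely negative
HIGH_THRESHOLD = 0.90  # above this, the exam is treated as highly suspicious

def simulated_read(ai_score: float, radiologist_recall: bool) -> bool:
    """Return True if the patient would be recalled for further workup.

    At the two extremes, the simulation assumes the radiologist defers
    to the AI score; in the uncertain middle band, the radiologist's
    original (retrospectively recorded) decision stands unchanged.
    """
    if ai_score < LOW_THRESHOLD:
        return False             # AI says "almost certainly clean": no recall
    if ai_score > HIGH_THRESHOLD:
        return True              # AI says "very suspicious": recall
    return radiologist_recall    # middle band: keep the human decision
```

The key assumption Dr. Lehman questions below is baked into the first two branches: that the radiologist always accepts the AI's call at the extremes.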

Would this generalize to other populations, for example, outside of Germany? Because this was all conducted at screening centers in Germany. And really importantly, will this translate? Will this translate over into clinical practice?

So while these are encouraging early results, the authors were also very careful to say, these are the limitations of this level of study that we performed. And I think it’s important for people to hear that because we’re all so excited about AI. And we can get ahead of ourselves a little bit on what we actually know and what’s promising.

IRA FLATOW: So it’s promising, but not ready for prime time yet. Is that what you’re saying?

CONNIE LEHMAN: 100%. Many, many investigators are giving us fantastic reviews saying, this is promising, this is exciting, but let’s not get ahead of ourselves because we learn from our past. I published a paper in JAMA in the early 2000s on CAD that was being used out in community practice. And what we found was the simulated reader studies didn’t translate over to actual real-world mammography interpretation. And that was really disappointing because the hope was, of course, that women were getting their mammograms interpreted at a higher level when they had CAD applied to their mammogram. And we found that wasn’t the case.

We don’t want to repeat that. So these research publications are so important. They go through extensive peer review. Again, the authors are to be commended for saying, this is what we found, here’s the limitations, these are the next steps that are needed.

IRA FLATOW: Well, tell me what kind of research you think it will take to actually improve care with these algorithms.

CONNIE LEHMAN: Here’s the most important part of this paper. It’s a simulation. It was assumed that that first radiologist, if told by the algorithm, you don’t need to worry about calling this patient back, would actually change their mind and not call the patient back. We don’t know that that’s true at all or even if that would be the right thing, necessarily.

It also assumed that if the AI algorithm said, this mammogram is really suspicious, you need to bring this woman back in, that the radiologist would agree. So the assumption that the radiologist would agree to the two extremes and not be influenced at all with the middle range scores of the AI algorithm– that’s just a huge assumption. And I don’t think it’s a reasonable assumption. We’ve never seen consistency in humans integrating feedback into their clinical care pathway decision making.

And just think of that: if the score is highly likely negative, the radiologist accepts it even if they see something suspicious on the mammogram, and if it’s highly likely positive, they bring the patient in even if they don’t see a lesion to evaluate. It absolutely requires that we have a prospective study to answer that fascinating question of when a human accepts or rejects feedback from the computer.

IRA FLATOW: So what would you say is the takeaway message from this study?

CONNIE LEHMAN: The takeaway message is, using and leveraging the power of AI to triage mammograms that need more attention from those that need less attention– it’s here. This is going to be part of our future in screening mammography, and I couldn’t be more enthusiastic about this roadmap, this pathway that these authors have continued to contribute to with their publication. However, we’re on the road. We’re not at our destination. We’ve got a lot of work to do collaboratively to get to that final destination of showing the actual impact on patient outcomes when we leverage and use these tools in true clinical practice.

IRA FLATOW: And how soon do you think we can look forward to fewer missed cancers by AI and doctors working together?

CONNIE LEHMAN: I think it can be five years if we work together. Research and the science has been very exciting. I’ve seen my colleagues all around the globe, despite the challenges of the pandemic, continue to push this research forward, to work hard. This is what we need more of. And I think we can get there in a short period of time if we do this right.

IRA FLATOW: Well, Dr. Lehman, thank you for your work. We’re looking forward to this, also. Thank you for taking time to discuss it with us.

CONNIE LEHMAN: Thank you so much for having me. It was a pleasure.

IRA FLATOW: Dr. Connie Lehman, professor of radiology at Harvard Medical School and breast imaging specialist at Mass General Brigham in Boston.

Copyright © 2022 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/.


Meet the Producers and Host

About Christie Taylor

Christie Taylor was a producer for Science Friday. Her days involved diligent research, too many phone calls for an introvert, and asking scientists if they have any audio of that narwhal heartbeat.

About Ira Flatow

Ira Flatow is the host and executive producer of Science Friday. His green thumb has revived many an office plant at death’s door.

Explore More

The Messy Math Of Mammograms

Math biologist Kit Yates breaks down the numbers behind breast cancer screenings—and the serious implications of false positive and negative results.
