Can AI Make Medicine More Personal?

29:24 minutes

Listen to this story and more on Science Friday’s podcast.

When you go to the doctor’s office, it can sometimes seem like wait times are getting longer while face time with your doctor is getting shorter. In his book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, cardiologist Eric Topol argues that artificial intelligence can make medicine more personal and empathetic. He says that algorithms can free up doctors to focus more time on their patients. Topol also talks about how AI is being used for drug discovery and reading scans, and how data from wearables can be integrated into healthcare.

Read an excerpt of Topol’s new book here.



Segment Guests

Eric Topol

Eric Topol is the author of several books, including Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (Basic Books, 2019) and The Patient Will See You Now: The Future of Medicine Is in Your Hands (Basic Books, 2015). He is a practicing cardiologist at the Scripps Clinic and a professor of genomics at the Scripps Research Institute in La Jolla, California.

Segment Transcript

IRA FLATOW: This is “Science Friday”. I’m Ira Flatow. When you go to the doctor’s office, it seems like the face-to-face time is getting shorter and shorter, isn’t it? Sometimes you’re in and out in under 10 minutes. The experience can feel very impersonal. 

My next guest says that one of the keys to bringing back the human touch to medicine is AI, artificial intelligence. Sounds counter-productive or counterintuitive, doesn’t it? That an algorithm can increase empathy. But the less time that your doctors need to deal with charts or sorting through conflicting diagnoses, the more time they have to spend with you. 

Plus, now you can track your own heart rate with your smartphone. Computers are reading medical scans and detecting cancer. Will this new data make medicine more personalized? Or will it be information overload? My next guest says AI can give you more time with your doctor if it’s done correctly. 

Dr. Eric Topol is here to talk about all of this. He’s a cardiologist, and author of the book, Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again. Welcome back to Science Friday. 

ERIC TOPOL: Thanks Ira. Great to be with you. 

IRA FLATOW: You say that patients exist in the world of what’s called shallow medicine, insufficient data, time, context, and presence. What do you mean by that? 

ERIC TOPOL: Well, shallow refers to that lack of human bond. The very limited time to see a patient, limited time to formulate a diagnosis, review the data, have the context. And most of all, the deterioration of the relationship between patients and doctors. That’s really suffered over time because of the big business of medicine. 

IRA FLATOW: So what are the consequences of practicing shallow medicine? 

ERIC TOPOL: Well, besides the fact that doctors and all clinicians become data clerks and are tethered to keyboards, the patients suffer because the misdiagnosis rate is alarmingly high. Over 12 million serious misdiagnoses a year. And as you know, the errors that occur in medicine are one of the leading causes of death, not to mention other complications. So we have lots of mistakes, and we have this broken bond between patients and their physicians. 

IRA FLATOW: I want to get into that, but let me give out our phone number first. 844-724-8255, if you’d like to talk with Eric about his book Deep Medicine. 844-724-8255. Or you can tweet us @scifri. Let’s get into that, because you say one of the main themes of your book is the breaking of the bond between doctor and patient. 

ERIC TOPOL: Right. That is because, for example, of the lack of even eye-to-eye contact in those limited minutes. So that is a real hit, both on the patient side and on the doctor side. Beyond that, we know from how an expert diagnostician makes a diagnosis that if they don’t have it within five minutes, the chance of it being accurate is 28%. So that’s about the time that you actually have with a patient. So we can do better than this. We can transcend this problem of burnout and depression among clinicians who feel that they aren’t able to do what they went into medicine for in the first place. 

IRA FLATOW: You know, we think that just the opposite, that if we bring computers in it’s going to make the time less that you have with your doctors, because the computers will be doing the work. But you’re saying just the opposite. Because the computers may do some of the original scanning, for example, that you as a physician may have more time to spend talking to the patients. 

ERIC TOPOL: Right. Well, there’s so much data that no human being could have their arms around it for each person. We’re talking about terabytes of data between the records, and the scans, and sensors, and genomics. All these things together. So that’s really a critical aspect. But the other thing is speech recognition is so advanced now. There are over 20 companies that are already starting to get into the clinic, to use the voice to synthesize the note, and also, you know, whatever needs to be done after the visit. To basically liberate doctors from keyboards, which are mutually hated by both patients and doctors. 

IRA FLATOW: The three components of deep medicine, you write, are deep phenotyping, deep learning, and deep empathy and connection. Tell us about those three things. 

ERIC TOPOL: Well, the deep phenotyping refers to gathering all that data that’s appropriate for each individual. And so, today we don’t do a very good job of that because the data is in so many different places. Each person goes to lots of different doctors and health systems. And we’d like to have that from the time a person is in the womb, until the present moment. But that’s deep phenotyping. And that would include all the medical literature about a person’s condition. 

And then basically deep learning is what’s so exciting today. The most radical jump we’ve seen in the history of AI, which has been going on for decades, is deep learning. Which is this neural network that can process data with remarkable accuracy for speech, for images, and for text. And so if we use that appropriately, we can outsource to get to this deep empathy state, which is to restore medicine to the way it used to be decades ago, when there was this precious relationship with the presence, with the trust, and with the really tight bond. 

IRA FLATOW: You write– it was so surprising to read this in the book– about the failure of electronic health records to actually live up to their promise, or be very helpful. You say the use of electronic health care records leads to other problems. The information they contain is often remarkably incomplete and inaccurate. Electronic records are very clunky to use. And on average, 80% of each note is simply copied and pasted from a previous note, so mistakes go along with it. 

ERIC TOPOL: Right. Ira, it’s amazing. You know, that’s been documented in the recent literature, that these notes are error laden. And the errors just get propagated from one note to the next. The software is just beyond the clunky description. I mean, I recently had to get retrained in Epic. 25 hours of training to use the software. I mean, this is amazing. And it’s just so many steps to do such simple things. 

This would never work in the real world of technology, but it’s the way we have it. Companies like Epic, Cerner, and many others, that’s the way medicine is practiced and burdened in this country. It’s been an abject failure, without question. 

IRA FLATOW: I want to get into that a little bit more. But Matt in South Bend, Indiana has a question related to that. Hi, Matt. 

MATT: Hi. 

IRA FLATOW: Go ahead. 

MATT: Hey. Well, thanks for taking my call. Yeah. I’m just calling about– the question is regarding the data itself. Where is the data stored? You know, and is it something that’s automatically happening when I go to my doctor? 

ERIC TOPOL: Right. That’s really important, Matt. I think where we want to be is that each person owns their data. That ought to be a civil right. Because no one has all their data, and it’s coming from multiple sources. It used to be, you know, just with the doctor’s office and the hospital. And it was hard to get to. But now it’s with your sensors, increasingly so. It’s with your genetic studies, your microbiome, and so many other different places. Even environmental sensors. So that all– all that data belongs with you. You have the most vested interest. It’s your body. You’ve paid for it. And we’ve got to get there. 

IRA FLATOW: Let’s talk about– 

ERIC TOPOL: And by the way, you can’t have AI without great input. Deep learning requires all your inputs to get that output to help you, whether it’s to prevent an illness, or to manage something that you’re working with, you know, a condition that you have. 

IRA FLATOW: Wasn’t that the original computer mantra when I was young? 

MATT: Right. 

IRA FLATOW: Garbage in, garbage out, [INAUDIBLE]. 

ERIC TOPOL: You got it. Yeah. And if we don’t have a complete data set– you know, that’s what I opened the book with, being roughed up with my knee replacement. The surgeon didn’t have all my data. And that led to some serious adverse outcomes for me. 

IRA FLATOW: Let’s talk about that, in particular, about deep phenotyping. Are we going to be entering a day where you would bring into your doctor’s office, if it’s not going through their health records, a thumb drive with your whole genetic profile on it? And, you know, they can look to see what’s the best personalized medicine to give you? 

ERIC TOPOL: Well, you could do that. Or you could, of course, transfer it in advance of a visit. But the point being is this algorithmic processing of that data, which always will require human oversight. But that will be, you know, distilled. We just don’t have time to go through– get this– it’s voluminous. And so you want to have this distilled as much as possible. 

IRA FLATOW: All right. And how would the doctor then be a co-partner with the data? Give us an idea how they would work together, and what kinds of different tasks each would do. 

ERIC TOPOL: Well, there’s lots of different scenarios. You know, right now we already are seeing radiologists having the scans read first by a deep learning algorithm, which assures that it won’t miss things, like a nodule on a chest X-ray, or a fracture in a wrist, or things like that. So that’s one scenario where it’s a pre-read, but then you have the radiologist’s context and experience to provide that oversight. 

There’s many different ways that this can be folded in. You may have your data being monitored. Let’s say you have a condition like diabetes, and you have various kinds of data coming in, not just your glucose, but your sleep, your physical activity, your stress level, what you’re eating. And it’s basically coaching you to have better glucose regulation. These are what we call multi-modal data inputs, and that’s the best way. As opposed to today, where we have these dumb algorithms that just tell you whether your glucose is going up or down. So we can do so much better than that in this deep learning algorithmic era. 

IRA FLATOW: And when the computer makes that first pass, then the doctor has more time to talk to you about what the meaning of that result is. 

ERIC TOPOL: Right. And I think what’s so astounding, really, Ira, is the fact that we can train machines now to see things that humans will never see. And that’s really quite extraordinary. It’s almost as if things you would never have thought would be possible are now attainable. So whether it’s showing a retina picture to top retinal specialists, and when they look at it they– is this from a man or woman? They have a 50% chance of getting that right. But the machine algorithm is over 97% accurate. And so many things like that. So for the gastroenterologists, polyps are frequently missed. But now they can be– machine vision can find them all. 

IRA FLATOW: My guest is Eric Topol, author of Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again. We’re going to take a break. When we come back, we’ll talk more with Eric. Our number, 844-724-8255. Stay with us. We’ll be right back after this break. 

This is Science Friday. I’m Ira Flatow talking with Dr. Eric Topol, author of Deep Medicine. Our number, 844-724-8255. Let’s go to the phones. Yeah. Let’s go to Seshi in San Antonio. Hi, Seshi. 


IRA FLATOW: Hi there. Go ahead. 

SESHI: So I am a third year medical student, Dr. Topol. And this seems to pertain to the distant– or near future of my education and career. My question is, well, initially electronic health records were thought to be, you know, the save-all. We’re going to get all the patient’s information. It’s going to help us. And then we get dozens and dozens of companies creating it. And now we all have a whole other beast to deal with. 

How do you foresee the development and implementation of AI? Do you feel that this should be a public endeavor where it can integrate all these different electronic health records or inputs of information? Or do you see the private industry, again, taking over and possibly creating another beast of various AI’s that various hospitals use? 


IRA FLATOW: Thanks for your question. 

ERIC TOPOL: That is a great one, Seshi. I wish I could go back to third year med school, because the medicine of the future is going to be so much better in this regard. But what we had, the debacle that occurred with the electronic records, was just unacceptable software. But now we have the tech titans, and so many really innovative start-ups, in this space. And the kind of functionality and user interface is so much better, whether that be for doctors, and now for patients. The problem really comes down to the fact that these EHRs were made for billing. 

They had no business– I mean, all business, but no patient care. There was nothing patient centered about them, or doctor centered, for that matter. So that, I think, explains the fiasco. Now we’re changing that. And we’ve already seen in other countries that they’ve been able to adjust the software to be patient centered. So I do think it’s achievable. 

This is basically software algorithms dealing with data. And I think one thing to keep in mind, Seshi, is that we as people have early satiety with data, but when you get working with the right algorithms, they have an insatiable hunger. They can’t get enough data. And they can do things that we’ll never be able to do. 

IRA FLATOW: There is a trend in the internet of things. There’s so many personal sensors now. I know that you’re very familiar with these, on our phones and watches. In fact, my brother was motivated to go to the hospital when his Apple Watch showed heart fibrillations. Aren’t these sensors good ideas? You know? I mean– 


IRA FLATOW: I know that you’ve worked with sensors. And actually, I remember watching you tweet how you diagnosed one of your own kidney stones with your own sensor. 

ERIC TOPOL: Right. Yeah. 

IRA FLATOW: So what is the path we should take? Should we depend on these sensors and feed the data in? Or are we going to be too dependent on them? And what’s the best way to integrate them? 

ERIC TOPOL: Right, Ira. I think the problem with sensors is the appropriate use. So if you have risk for atrial fibrillation or symptoms, or you’re in a group of people for whom there would be high suspicion, that’s one thing. But it might not be something– an atrial fibrillation detection watch– for everybody. 

When I diagnosed on my smartphone that I had a dilated kidney, you know, when I showed up to the emergency room. And the emergency room doctor thought I was an alien when I showed him the picture. And still sent me for a CAT scan. So that kind of shows you– that’s emblematic of not fully trusting the sensors and the things that we’re working with today. But I think over time we’ll figure out really who are the right people, the right circumstances to apply these things. We don’t want it done in a willy nilly way. Because then you just wind up with more incidental findings, more trouble. We have to be really– particularize the way we apply things. 

IRA FLATOW: I want to talk about what I mentioned at the top of the hour. I was talking about radiologists who couldn’t see a man in a gorilla suit on a scan. That really happened. 


IRA FLATOW: Tell us about– 

ERIC TOPOL: No, it’s an experiment– quite an experiment– where it shows that humans, and in this case it was radiologists, their attention, their ability to see things can be impaired. And they missed the gorilla, you know, some 80% of the time. Now, why is that important? Well, we get tired– doctors, and nurses, all clinicians. We have bad days. You know, we have moods. We need time off. And of course, machine algorithms can take on things all the time. They can get sick too, of course. But for the most part, they’re not distractable. And they can get trained. 

I think one of the things to note is that they have exceeded already so quickly what we had expected to see in a health care scene. And it’s just going to get more impressive over time. A lot of this still needs validation, replication, and surveillance. But I think this is– the point that’s quite noteworthy is, you know, people can only do so much. And we need the complementarity. It’ll augment human performance. And then as I mentioned earlier, outsourcing, so we can have that human to human bond. 

IRA FLATOW: Now you did say, before we went to the break, that AI is able to see granularity in the data that people can’t see. Picking out that stuff that’s very, very hard to see and would be very significant. 

ERIC TOPOL: Yeah. The flood of data from high resolution images, from continuous wearable sensor output, from the electronic records, from all these other sources, is overwhelming. And in general, you know, the whole world is already exceeding yottabytes. We’re moving into– we need hellabytes. 

You know, it’s really– so we need help. And this is a rescue for that inability to cope with this overwhelming flood of data for each person. I mean, each person is into high numbers of terabytes already today. And that’s just going to increase. 

IRA FLATOW: Here’s a relevant question to that. Mike in Sugarland, Texas, welcome to Science Friday. 

MIKE: Thank you. 

IRA FLATOW: Go ahead. 

MIKE: So the question is, I am in the AI and ML field. I’m in IT. I deliver deep learning. So I get that part. The question is, we have so much data coming in from Fitbit, Apple Watch– each specialist, each hospital system has its own data system and repositories. So how are we going to get all this data into one analytical repository so we could do this deep learning? 

ERIC TOPOL: Yeah. Great question, Mike. It’s been done in Estonia, of all places. Every citizen there owns their data, on a blockchain format. And it’s continually updated. If they can do it, I think we can do it. 

But you’re absolutely right. This is the problem we have right now. Things are so fragmented. And in order to, as you know, get it to work through the neural net, you’ve got to have those inputs. And we are not well positioned for that, to help each person. It’s really a vital step that’s necessary. 

IRA FLATOW: Well, do you think that AI and these systems will be adopted faster in countries that have universal health care where things may be more centralized? 

ERIC TOPOL: Absolutely. You know, I just finished a year-and-a-half commission by the UK government to work with a team to review the NHS. And I saw they already are taking off with AI. They’re already using voice in emergency departments to synthesize notes, and not using any keyboards. And so they’re planning ahead for the AI workforce, which is going to have a very substantial impact. 

So universal health care does help this. In China, where they have all the data for each person– of course, that brings up the issues about privacy– but they are moving much faster, because they have it all in one place. And they are way ahead in implementing AI. 

IRA FLATOW: Justin in San Antonio, Hi. You’re next. Welcome to Science Friday. 

JUSTIN: Hi. Yes. Actually, right off of that comment, are you concerned that we don’t have the civil rights in place to deal with this type of technology coming into mass use? And also, are you concerned about the society as a whole relying more on diagnostic medicine because of these extreme improvements in the diagnosis? Thank you. 



IRA FLATOW: As opposed to preventive medicine, I guess. You know? 

ERIC TOPOL: Well, I mean, I think I am worried and wrote quite a bit in Deep Medicine about the 27 reasons why everyone has to own their data. We just talked about how we can’t really do AI well without that. So this is something that we have to support. It’s going to require activism. But eventually it’s– because a lot of the data today for each person is homeless. 

You don’t have your sensor data sitting in your electronic record. And you don’t want your genome sequence or other genomic data in your electronic record. So we don’t have a place for it all. And we need that. 

So, you know, eventually we’ll get there. But to get to that dream of prevention, where we will know the risk of many common conditions early in one’s life and actually prevent them, you have to have that data continuously brought in, so that the neural net can work with it. And so step one is having all your data. And, at least in this country, very few people, if any, have that. 

IRA FLATOW: I have a tweet from Kelly who says, will AI really lead to a doctor spending more time with patients? Or will they just schedule more patients per hour? 

ERIC TOPOL: Yeah. Well, you know, Kelly and Ira, that’s my fundamental concern. A lot of people, as I am, are worried about privacy, and inequities, and data security, and, you know, bias. But the biggest thing for me is, if we don’t stand up for patients, this is the time. Because there’s going to be this big revving up of productivity, efficiency, workflow. And if we don’t say that’s got to be the gift of time to spend with patients, which is so vital and has been lost– if we don’t do that, we’re going to lose perhaps the biggest opportunity that we’re going to see for a long, long time. 

IRA FLATOW: Yeah. You see that as one of the main messages of your book. The takeaway you hope people have. Let’s talk about nutrition. Because you mention nutrition as one area where the guidelines keep changing. How can AI be used, you know, in nutrition? 

ERIC TOPOL: Yeah. It’s fascinating, Ira, because in the chapter on deep diet AI I get into the point that we didn’t know how to individualize a diet until we had machine learning. And the group in Israel at the Weizmann Institute led by Eran Segal has now studied thousands of people. They got all their data that we’ve been talking about, plus their gut microbiome, glucose sensors, everything they eat. 

And what they were able to show is you could predict from all that data what would be good for you to eat to avoid glucose spikes. And although we don’t know that getting rid of these spikes will prevent diabetes, it certainly is suggestive. And of course, now we’re learning about other things that are very heterogeneous. 

So if you and I eat the exact same thing, the exact same amount at the exact same time, we would have very different glucose responses, and triglycerides, and other labs. And so the question is, can we individualize a diet? And we’re chipping away at it, but if it wasn’t for AI, we wouldn’t have been able to bring all this data together to fashion a bespoke diet, if one wants to follow it. 

IRA FLATOW: I’m talking with Eric Topol, author of Deep Medicine, on Science Friday from WNYC Studios. You know, if you can have a book about AI be a page turner, Eric, you certainly have done it. Let me go to the phones, to John in Denver. Interesting question, John. Welcome to Science Friday. 

JOHN: Hi. Thank you, Ira. Yeah. I read an opinion piece in the New York Times back in January by Dr. [INAUDIBLE], an assistant professor of health policy. Anyway, the gist of his opinion piece was that we might incorporate into AI some of the biases we already have, racial biases or sexual biases. We’ve known for a long time that the medical profession has struggled to get accurate research done for, say, women with heart attacks, or other minorities not included in research studies. Is there a possibility that AI could kind of incorporate those biases without us even realizing it? 

ERIC TOPOL: Well, you’re absolutely right, John. But it isn’t the AI that’s doing it, it’s us humans. So all that bias is part of the input. And that’s the problem. It’s already there– you know, I go through many examples in the book of where that bias has shown up in the inputs. 

And of course, you can expect that the bias is coming out as well through the neural net. So I think this is something– it’s interesting, with all these problems of AI, now they’re starting to use AI to deconstruct and prevent the bias from being inputted. And so that’s going to be interesting to see, if we can get our arms around it. But this is a serious problem. 

IRA FLATOW: And let me– in the couple of minutes we have left– a lot of people have tweeted this. They want to know from you, Eric, what is the roadmap? How do we get all of this done like Estonia did, for example? 

ERIC TOPOL: Right. Well, there’s lots of different things that we need to do. But the biggest thing for sure is that we embrace this potential– this rescue– for so many clinicians and for patients, together. If we have patients taking on more responsibility and charge with the data that they’re generating, and doctors outsourcing some of the things that they don’t do well, or don’t want to do, like being a data clerk, we can see this flywheel effect. So the biggest thing is we’ve got to stand up and use this properly to get the care back in health care. 

IRA FLATOW: And you’re saying– 

ERIC TOPOL: And we can do this. 

IRA FLATOW: –that doctors have to stand up themselves, like the Parkland high school students stood up. As you say in your book, the doctors have to take the lead in this. 

ERIC TOPOL: Yeah. We just lay down when EHRs came around, and all these other things. But as you’ve seen recently, when doctors stood up to “stay in my lane,” over guns and the NRA– we can do this. And I think it’s going to be vital if we’re going to get this moving in the right direction. Because it could make things worse. We’ve got to acknowledge that, when you have all this benefit that we don’t actualize for patients. 

IRA FLATOW: How do you make this a campaign issue in the upcoming– 

ERIC TOPOL: Ha ha. Well, you know, we don’t have in this country universal health care, which we need to have. And we also need to get– honor the fact that each person should own their data, and they shouldn’t have to struggle so hard to just get little pieces of it, which is just absurd. 

So it’s not just that our country is an outlier on universal health care, with the worst outcomes of the 37 richest countries, and the only one with such gross inequities. It’s all these potential benefits that can be derived from providing care for each person. But not just health care, also the care. 

IRA FLATOW: The book is Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again. Dr. Eric Topol, a cardiologist. And you can read an excerpt from his book on our website at sciencefriday.com/deepmedicine. Eric, fantastic book. I mean, there’s so much information in there. It’s a great read for everybody, and certainly for the health care profession.

Copyright © 2019 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Alexa Lim

Alexa Lim was a senior producer for Science Friday. Her favorite stories involve space, sound, and strange animal discoveries.

About Ira Flatow

Ira Flatow is the host and executive producer of Science Friday. His green thumb has revived many an office plant at death’s door.
