01/22/26

Deepfakes Are Everywhere. What Can We Do?

Deepfakes have been everywhere lately, from fake AI images of Venezuelan leader Nicolás Maduro following his (real) capture by the United States, to X’s Grok AI generating nonconsensual images of real people in states of undress. And if you missed all that, you’ve almost certainly had your own deepfake close encounter in your feed: maybe rabbits bouncing on a trampoline or an unlikely animal friendship that seems a little too good to be true.

Deepfakes have moved beyond the realm of novelty, and it’s more difficult than ever to know what is actually real online. So how did we get here, and what, if anything, can we do about it?

Joining Host Flora Lichtman are Hany Farid, who’s studied digital forensics and how we relate to AI for over 25 years, and Sam Cole, a journalist at 404 Media who’s covered deepfakes and their impact since 2017.



Segment Guests

Hany Farid

Dr. Hany Farid is a professor of electrical engineering and computer sciences at the University of California, Berkeley.

Samantha Cole

Samantha Cole is a journalist and co-founder of 404 Media, based in New York City.

Segment Transcript

FLORA LICHTMAN: Hey, this is Flora Lichtman, and you’re listening to Science Friday.

Deepfakes have been deep in the news lately. Right after the US invasion of Venezuela and capture of Nicolás Maduro, fake AI images of him in custody circulated online. Then there was the news that X’s AI chatbot, Grok, was generating nonconsensual images of real people in clear bikinis.

And if you missed all that, you probably have had your own deepfake close encounter on your very own feed. Maybe rabbits bouncing on a trampoline, or an animal friendship that seems just a little too good to be true.

So deepfakes are now everywhere. They have moved beyond the realm of novelty. It’s more difficult than ever to know what’s actually real online. So how did we get here, and what is there, if anything, to do about it?

And just a warning, I think this conversation may get dystopic and disturbing.

Here with me now is Doctor Hany Farid, a professor at the UC Berkeley School of Information, and chief science officer at GetReal Security. He’s studied digital forensics and how we relate to AI for over 25 years. And we have Sam Cole, a journalist at 404 Media, who’s covered deepfakes and their impact since 2017, which is before deepfake was even a word.

Hany and Sam, welcome to Science Friday.

SAM COLE: Thank you for having us.

HANY FARID: Good to be with you, Flora.

FLORA LICHTMAN: OK, Hany, let’s start with you. Is it just me, or have deepfake images and deepfake voices and deepfakes now become basically indistinguishable from real media?

HANY FARID: It is not just you. In fact, we have science to answer this question. In my lab here at UC Berkeley, we do perceptual studies. We show people images. Half of them are real, half of them are fake. Or we have them listen to audio recordings of people’s voices. Half are real, half are fake. And most recently, we are just wrapping up a study on full-blown video as well.

And here’s what I can tell you. When it comes to still images, basically, people are at chance. They are very, very bad at it. It is a really hard problem. Essentially, images have passed through what we call the uncanny valley. They have become so realistic that it is almost impossible to reliably tell a real photo from a fake photo.

Voices were a fast follow. Now, if I clone your voice, for example, and play a snippet to somebody, they will not be able to tell it’s AI and they will think it’s you. And video is close behind.

So what I can tell you is we’re a little bit better than chance, but not much, and have me back on the show in six months and it’ll be over. So we know how this is– what the end game is.

Every single piece of content that we see online, purely visually, is becoming indistinguishable from reality. And if I may just add one more part to this, that it’s not just that the technology is very good. It’s that we have become an increasingly polarized society. And this content is being designed to push our buttons. And so we are not at our best when we are doomscrolling through social media. And that adds a whole ‘nother level of confusion to the online spaces.

FLORA LICHTMAN: Sam, what’s your perspective on this? I mean, you’re not a lay user. You’re– I’d put you in the expert class of online consumer. Do you feel like this moment is significantly different than it was a year ago?

SAM COLE: I think definitely. I think the moment that we’re in every month feels different at this point.

FLORA LICHTMAN: Yeah.

SAM COLE: So the moment today is not the moment yesterday. But I think another thing that really adds to all of this is just how prolific and easy to use this technology is now. So it used to be that you needed a really advanced gaming computer and a little bit of skills for coding and a lot of energy to spend all night generating deepfakes.

But now they’re being advertised on social media. You download an app to your phone, you have a picture on your phone of someone’s face– one single picture. Maybe you got it off of Instagram or Facebook or whatever. And then you can create whatever you want of that person, in whatever scenario you want, which, I think, is a whole other level of this technology that we’re just not really ready to grapple with.

FLORA LICHTMAN: Yeah. I mean, I think that leads us right into this recent disturbing Grok news. So this is X’s AI image generator that has been used to create sexually explicit images of real people, right? Fake images of real people. And how did this story unfold?

SAM COLE: This is a problem that’s been going on for a long time. I guess we can go even back to before non-consensual images were synthetic on Twitter, before Elon even bought the thing, where people were sharing these images that were created non-consensually or were abuse images, even, on Twitter. Twitter didn’t know how to moderate it, was slow at moderating it–

FLORA LICHTMAN: Using AI? When you say nonconsensual–

SAM COLE: Not even AI.

FLORA LICHTMAN: Just regular.

SAM COLE: So we have this precedent established already, where Twitter, pre-Elon, pre-generative AI, even, is full of abuse imagery. And then what you have after AI, after generative AI, after Elon bought X is a platform where moderation has been gutted and generative AI has come in to fill these gaps where people, like Hany said, want to create engaging content that is mostly outrage bait, and a lot of the time that’s abuse imagery created by AI.

So that gives you a little bit of context for the past couple years. But in the past couple weeks, we’ve seen people creating nonconsensual images with Grok just straight in the X feed. So replying to women’s images– it might be someone posted a vacation selfie, or it might be, like, even just, like, OnlyFans models or people who are online for a living.

And replying to those images in the X feed, saying, @grok, make her wear a clear tape bikini. Make her bend over and face the camera. Anything that you can imagine is– kind of like, the porn category descriptor of things, people have been imagining that and putting it into a Grok prompt.

And this has been happening for a couple of weeks, and then it exploded into virality. Lots of people were doing it. They realized they could do it with Grok. So it’s reached this mainstream level, like you said, where it’s just in your feed, it’s unavoidable, and it’s targeting whoever dares post a picture to X.

FLORA LICHTMAN: Hany, is this legal?

HANY FARID: Wow. Well, it depends who you ask. And it depends on which country. So some countries have banned Grok outright. Here in the United States, the Take It Down Act will go into effect in a few months, which requires platforms to remove nonconsensual intimate imagery within 48 hours of being notified. There are issues with that bill, not the least of which is that it puts the burden on the victim to police the internet to get their content taken down.

But what is unambiguously illegal is when this is done to children. That is child sexual abuse material, and that is absolutely illegal and reprehensible. And, of course, at the state level, things get more complicated. Some states have laws that make this illegal, others don’t. At the federal level, it’s– so it’s a bit of a mess right now.

But what I can tell you is in the EU and the UK and Australia, there are open investigations into violative content.

But here’s the thing. It’s the internet. So I’m all for the regulation. I’m all for holding these platforms accountable. But it’s gotten very ugly. And the thing that I think you have to understand about what Grok did is, I mean, we started off this conversation by saying, look, this stuff’s been around for a long time.

But they were more bespoke techniques. You had to go– as Sam was just saying, you had to go find this app and you had to go download this code. And there were these more, I would say, fringe types of applications.

But what Grok did is that it centralized the creation, the distribution, and eventually the normalization of this content. And that’s the real sort of sin here, is the way they just made it so easy to do everything at once.

And here’s the thing you have to understand. It doesn’t have to be this way. Go take many of the prompts that you’re seeing people put into Grok AI and try to put them into OpenAI’s ChatGPT or Google’s Gemini, and it won’t work.

FLORA LICHTMAN: There are guardrails that you can build in, if you choose to.

HANY FARID: Exactly, if you choose to– and that’s the important word, right? And so this was a preventable problem. It was also a foreseeable problem. I mean, it’s literally called spicy mode, is what he called it. So we’re not even trying to pretend that we want to protect individuals and children. We are outright giving these weapons to people, and somehow we’re surprised that they’re doing exactly what we tell them to do.

FLORA LICHTMAN: It’s a feature, not a bug.

HANY FARID: It’s a feature, not a bug. Well said.

[MUSIC PLAYING]

FLORA LICHTMAN: I have to take a break, but hang tight, because when we come back, is there anything to do? Stay with me.

[MUSIC PLAYING]

FLORA LICHTMAN: How does the back end work? I mean, how are these deep– is there a simple explanation for how these deepfakes are made and why they’ve gotten so good recently?

HANY FARID: Yeah, so there’s– OK, so there’s lots of different types of deepfakes. So let’s talk specifically about the nudify ones. Yeah? Because I think that’s the brunt of the conversation. So the way these deepfakes are made is you upload an image of a person who, let’s say, is fully clothed. And there’s a couple of steps that unfold.

So the first is that the AI algorithms will automatically detect that there’s a person in the image. It will separate their head from their body– so it knows where your neck is and where the head above it is. And it leaves everything from the neck up, and the entire background, alone.

And then it takes from the neck down, and it essentially removes all those pixels. And then it hands that image to an AI that says, OK, fill this in with a nude body or a bikini body. And so if you’ve ever been on one of these image generators, you can type “give me an image of” and you can give a descriptive prompt. So here, the prompt is simply, “create a body that is a bikini or nude.”

And the AI systems have been trained– in many cases, they use what are called foundation models, which are general purpose image creators that are then customized and trained on lots and lots of explicit content so it can make nude bodies.

And by the way, this usually only works for women. Sam has noticed this in the past in some of her writing, that these things don’t do so well with men. That’s because most of the training is on women’s bodies.

FLORA LICHTMAN: Oh, wow.

HANY FARID: And of course, everything from the neck down is synthetically generated. But the important part here is the person is still identifiable, because the AI leaves the face and the background fully intact.

So it would be one thing if what we were doing here was doing AI-generated explicit material where nobody’s identifiable. But that’s not what we are doing. We are taking somebody’s identity and creating them in an explicit pose or act. And then, of course, on Grok, it is then being shared in that person’s feed so that it is being weaponized against them.

And this is fairly well-established technology. It’s been around for a long time. It has been getting better and better and better, because the models are being trained on more and more data.

FLORA LICHTMAN: OK. Let’s get into it. What is there to do? Hany, let’s start with you. And then I want to hear from you, Sam.

HANY FARID: OK, so there’s a couple of sledgehammers, if you will. So we can try the regulatory path. But you and I both know that that’s going to be slow and fraught and imperfect at best. If for no other reason, the lobbyists will make sure that they do what they do to water down any bills that happen. At the international level, I think we’re seeing more pressure from the regulatory side of things.

I would like to see this being dealt with in the courts. I think we need to start suing these companies for the harm that they’re creating. Because the fact is that if you sue companies for creating products that are harmful, they will internalize that liability and they will start to create better and safer products.

I also think we shouldn’t let the entire technology ecosystem that empowers this off the hook. So that means ads, the financial institutions that allow these services to monetize–

FLORA LICHTMAN: Wait, say more about that. What do you mean exactly?

HANY FARID: Well, OK. So go to X, and how does X monetize? Well, they’ve got the Pro accounts, but they also have advertisers. The content that we are talking about literally has ads running against it. Why are companies allowing their ads to be run against this? They’re the ones fueling this.

This app, which violates the terms of service of Apple and Google, is still in the app store. Why are you empowering that?

Now go outside of X and go to websites that are explicitly and uniquely designed to nudify images of women and children. They will have a little icon that says Visa, Mastercard, PayPal. Why are the financial institutions allowing these services to use them?

So there’s an entire ecosystem here that is propping up these bad actors, and we should also hold them accountable and tell them, hey, if you pull your services from these bad actors, well, we can knock them off the internet.

So I think that there are also– the last thing is there are technological interventions here. These are the easy things, though. The problem is that there’s no will at places like X to deploy them. But we know how to make these products safer. Elon Musk is simply choosing not to do that.

FLORA LICHTMAN: Sam, what’s your perspective on this?

SAM COLE: So I think a lot of what I’ve been mulling over and seeing really echoes what Hany just said. It’s like, we need to be stricter about what we’re allowing in the app store, even though these applications are usually very strictly regulated and enforced in the app stores.

Apple does not mess around with porn apps. It does not allow a lot of other types of apps that touch on sexuality in the app store. But Grok is still there. And people are making some of the worst stuff– worse than what’s in the X feed– in the application that’s on the Apple app store. So why is it still there? I have no idea. It’s very blatantly against their terms.

FLORA LICHTMAN: Well, do you have– why is it still there? Do you have a hypothesis?

SAM COLE: I mean, I assume because of who owns it. I assume because the richest man in the world owns it, and he has a lot of pull, a lot of power, unfortunately.

HANY FARID: And I’ve heard the line, by the way, too big to ban.

SAM COLE: Yeah.

HANY FARID: You can ban the small indie apps, because who’s going to say anything? Try to ban X from the richest man in the world and see what happens. I think that’s a bad reason, by the way. But I think that is a reason that has been given.

FLORA LICHTMAN: Yeah, I just read a story about high schoolers making explicit images of their classmates. Which is, like, it’s just, like, cyberbullying on a new and such a disturbing level. Is there anything that individuals can do to protect themselves?

HANY FARID: No. This is the sad truth. It’s just– I’m sorry to be the bearer of bad news. But here’s the thing. 10 years ago, the people who were vulnerable to this type of content were high profile people. The Scarlett Johanssons of the world that had hundreds and hundreds and thousands of images of their likeness online.

But what has happened is the technology has gotten so good that I need a single image of you, ten seconds of your voice, and I have you. And so what you are essentially telling women and young girls is you have to be invisible on the internet to be safe.

FLORA LICHTMAN: Which is impossible.

HANY FARID: It’s impossible. And even if it was possible, it’s ridiculous.

SAM COLE: And I also– the reason that sexually explicit deepfakes, especially, are useful is that the worst thing you can be as a woman in this society is a woman who is in control of her sexuality, and especially a porn performer, a sex worker. So I think we need to take away a lot of that stigma of being an adult content creator, an adult performer, or even just a woman on the internet, and figure out how to have that conversation with young people, especially.

And you mentioned middle schoolers and kids using this in school. And it’s like, if we have conversations about consent, bodily autonomy, and what is and isn’t cool to do to other people’s images at a very early age– age-appropriate, obviously– then I think we could get somewhere in, like 10 to 15 years.

But obviously, that’s a much longer term and harder problem when the problem is happening right now. And it’s severe right now. And we do need all these other things like guardrails and regulations and good laws. But in the long term, I think this is symptomatic of something else going on socially.

FLORA LICHTMAN: What about posting pictures of your kids online? What are your thoughts on that?

HANY FARID: Stop, please. Just stop. I mean, this is incredibly– I mean, again, I don’t want to be the grouch in the crowd, but you gotta know that there are really, really bad people, and there are not few of them, who are taking those images and doing awful things to them. And one of them, which we have seen over and over again– I mean, the best case scenario is that they take those images, they nudify them, and they share them online. That’s your best case scenario.

The worst case scenario is they send them to your child and start extorting them– which has, by the way, happened, which has led to children taking their own lives. It is horrific. This is not a place where you should be posting photos of your child. I just– the answer is just no. This is an easy one.

FLORA LICHTMAN: Sam, let’s talk about the human side of this. Have you spoken with victims of these deepfakes?

SAM COLE: Yeah. Yeah. People who are targeted by deepfakes often say that what they want the most is for this content to stop spreading. Victims say that when this happens to them in a really severe way, they lose job opportunities. They lose the ability to talk online with other people. As Hany mentioned, free speech goes out the window, because it’s silencing people who otherwise were trying to have a normal online experience and now are not wanting to pitch in on conversations or post anything. Their accounts go private. They have to lock it down.

So I think it’s such a chilling effect on women’s speech, in particular. It makes you think twice about whether or not you want to post that picture, because some creep might reply to it and say, “make her bend over backwards in a clear tape bikini.” People who are targeted often say, I don’t know who makes this content. I don’t know if they’re my neighbor. I don’t know if they’re my coworker. I don’t know if they’ve seen it. It’s like, I don’t know if my classmates have seen it or if they’re the ones making it.

And in that way, it makes their online life and their real life in person harder and harder to live. And it’s just such a– it’s such a hard problem to get back in the tube, so to speak. Once it’s out of the bottle, it’s out.

FLORA LICHTMAN: What will you be looking for in the next few months? Sam?

SAM COLE: [SIGHS]

I’ve kind of stopped trying to figure out what’s next.

[LAUGHTER]

I don’t even know any more.

HANY FARID: Can I say that the humph was really on–

SAM COLE: Deep sigh.

HANY FARID: It’s perfect. It’s perfect.

FLORA LICHTMAN: There are a lot of oomphs and humphs in this conversation, I must say.

SAM COLE: Yeah, I mean, I’ll see something horrible and then send it to Hany. That’s what I’ll be doing in the, probably, next couple of months, and be like, what do you make of this? And then we just do it all over again.

But yeah. I mean, I think what I’ll be watching to see is, are there going to be any repercussions, any accountability for this having happened? And if not–

FLORA LICHTMAN: Like, is this a tipping point?

SAM COLE: Yeah. And if it’s not, it’s a free-for-all. The shark has been jumped. It’s so over if nothing happens, accountability-wise, to these platforms that are very much in the mainstream, making this content and facing no repercussions. Even really basic repercussions– like getting removed, or even just getting suspended from the app store until they clean it up– would be the bare minimum.

FLORA LICHTMAN: Hany?

HANY FARID: I– I’m not particularly hopeful that we’re going to see any real leadership here in the United States. But I’m hopeful, and what I’ll be looking for is how the UK, the EU, Australia responds. There’s already open investigations. Many of the European countries have responded quite strongly, but now we need to actually do something. And if we don’t get leadership out of those parts of the world, I don’t see it coming out of the US.

And then, as Sam said, I think this is going to send a message to everybody in Silicon Valley that it’s a free-for-all. Do whatever you want. Even child sexual abuse now is not, apparently, a crime in Silicon Valley. And I think that’s going to end very badly for everybody.

FLORA LICHTMAN: Doctor Hany Farid, professor at UC Berkeley, and Sam Cole, journalist at 404 Media. Thank you both for joining us today.

SAM COLE: Thank you.

HANY FARID: Thanks for the conversation, Flora.

FLORA LICHTMAN: This episode was produced by Dee Peterschmidt. And before we go, I wanted to read a review from ReadsO2.

The review said, “Every time I listen, I learn something new and I feel more hopeful about the world.”

That part may not apply to this episode.

HANY FARID: Sorry.

FLORA LICHTMAN: But thank you, Reads. We really appreciate it. I’m Flora Lichtman. See you tomorrow.

[MUSIC PLAYING]

HANY FARID: That was funny, Flora.

[LAUGHTER]

FLORA LICHTMAN: I suspected it might get dark, so thank you for chiming in. Please come back. I feel so concerned about this. And we didn’t even get to all of my post-truth questions about whether–

HANY FARID: Oh, yeah.

FLORA LICHTMAN: –democracy can endure this. There’s just–

HANY FARID: It can’t.

FLORA LICHTMAN: –so much more to say.

HANY FARID: No, we’re totally [BLEEP]. I mean, we’re totally [BLEEP]. I mean, honestly.

FLORA LICHTMAN: Can we put this in after the credits, Hany? We’ll just bleep you.

HANY FARID: That’s the TLDR.

SAM COLE: It’s so over.

FLORA LICHTMAN: Oh, God.

Copyright © 2026 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Dee Peterschmidt

Dee Peterschmidt is Science Friday’s audio production manager, hosted the podcast Universe of Art, and composes music for Science Friday’s podcasts. Their D&D character is a clumsy bard named Chip Chap Chopman.

About Flora Lichtman

Flora Lichtman is a host of Science Friday. In a previous life, she lived on a research ship where aperitivi were served on the top deck, hoisted there via pulley by the ship’s chef.
