03/12/26

How Is AI Being Used In The Iran War?

The military use of AI is capturing headlines this month. After a dustup with the Pentagon, the AI company Anthropic is out, and OpenAI is in. Meanwhile, in the US war with Iran, AI is being deployed in ways we’ve never seen.

To make sense of it all, Host Flora Lichtman talks with journalist Karen Hao, who covers AI and is the author of the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.



Segment Guests

Karen Hao

Karen Hao is a tech journalist and author of the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

Segment Transcript

FLORA LICHTMAN: Hey, I’m Flora Lichtman, and you’re listening to Science Friday. The military use of AI is capturing headlines this month. There’s been a dustup at the Pentagon, with AI company Anthropic out and OpenAI in. Meanwhile, there’s a new war where AI is in use. What do we make of all of it? We knew just who to call. Karen Hao is a journalist covering AI. She’s written for The Atlantic and The Wall Street Journal, and she’s the author of the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, which takes you behind the scenes on the rise of one of the most powerful startups ever. Karen, thank you so much for being here.

KAREN HAO: Thank you so much for having me.

FLORA LICHTMAN: You are deep in this world. I have this feeling, in this moment, of AI’s power and reach snowballing. But what is your impression of this time?

KAREN HAO: Oh, my gosh, yeah. I mean, when I was working on my book and using the metaphor of empire to try and contextualize the sheer power consolidation that’s happened within these companies, I was not envisioning the fusion of this technology with the military and the alliance between Silicon Valley and Washington. And it feels startling that that metaphor has come to be the only one we can really use to understand this moment.

FLORA LICHTMAN: Well, it’s not a metaphor anymore. It’s literal.

KAREN HAO: It’s literal, yeah. Yeah, that’s right. And I did not anticipate that happening.

FLORA LICHTMAN: Let’s talk about the war in Iran. Do we know how AI is being used?

KAREN HAO: There has been reporting from The Wall Street Journal and The Washington Post that says that Anthropic’s AI model, Claude, was essentially used to analyze a bunch of intelligence data and then identify targets to bomb. And The Washington Post specifically said that there were around 1,000 targets that it identified. And one of the things that’s deeply disturbing about this is that large language models, the technology behind Claude, are a very faulty technology. They are not accurate.

That’s why sometimes when you are chatting with ChatGPT or with Claude and you try to get it to talk in more detail about something you have expertise in, it starts to make up things it claims to know. And in the military context, that doesn’t go away. And so we had news reports about a horrific bombing of a school in Iran. And not just one bombing, but two. So when the first responders and parents rushed to the site to try and save anyone who was still alive, they got bombed too. And there is speculation that it’s because Claude misidentified a civilian target as a military target.

FLORA LICHTMAN: Right. Though, just to clarify, it’s unclear if AI was to blame in this strike. And on Wednesday, US officials said that it was unlikely, according to New York Times reporting. So we don’t know.

KAREN HAO: We do not know, right. But it is such a legitimate possibility that it perfectly encapsulates what is going on right now. There’s so much uncertainty, so little transparency, and so little accountability in extraordinarily grotesque actions that are happening, and mass life-and-death decisions that are being made under a veil of secrecy. Yeah, it’s just really awful.

FLORA LICHTMAN: Isn’t this one of the sticking points between Anthropic and the Pentagon, that they said Claude isn’t ready for this? And this is different. This is autonomous weapon use, not just identifying targets. But the Pentagon seems to be both using Claude and blacklisting Claude. I’m confused.

KAREN HAO: Yeah, there’s so much going on. So Anthropic was the first company that got permission from the Pentagon to be used on classified intelligence systems. And that has been true for the last nine months or so. And the Pentagon then became, it seems, quite reliant on using Claude. Because when Anthropic and the Pentagon started fighting over the fine-grained details of how exactly Claude should be used, the Pentagon went for the nuclear option and said, we are either going to force you to bend to our wishes, or we are going to declare you a supply chain risk.

But after they declared Anthropic a supply chain risk, they’re still reliant on the technology. So there’s a six-month phase-out period. And hours after they declared that Anthropic is supposedly a threat to national security, they used the very tool that is bad for our national security in the bombing of Tehran. So, yeah, that’s one layer of what’s happening. But the other thing is, people have been lauding Anthropic a lot for standing their ground, and there’s a deeply complicated aspect to Anthropic’s role in this whole thing.

So Dario Amodei, the CEO of Anthropic, said that he did not want this current iteration of Claude to be used for autonomous weapons. But in a CBS interview, he said he was perfectly fine in principle with autonomous weapons. It was just not this version of the technology. And in fact, he had offered to co-develop autonomous weapons with future iterations of the technology. That’s one thing that complicates this whole story. The second thing is, I was speaking with Dr. Heidi Khlaaf, who is a chief scientist at AI Now, a policy research institute in New York.

And she’s been writing extensively about AI and the military. And she mentioned that what Amodei was saying is that he’s not OK, at the moment, with the current iteration of Claude running with no one popping in and checking, OK, what targets have been identified? But he was OK with Claude being a decision support system. So he was OK with it analyzing the data to identify bomb targets. And the Pentagon is using Claude exactly in the way that Amodei said he was fine with in the current iteration.

And Dr. Heidi Khlaaf was like, if you think that your technology is not good for autonomous weapons, it should also not be used for decision support systems. Because we have extensive research that has shown time and time again that there’s a huge automation bias with humans. When we see a chatbot or a robot do something or say something, we just believe it. And so even if you have a human that’s like popping their head in and being like, OK, have you identified the right bomb targets, they’re like, well, I mean, the bot is a computer and has analyzed all this data, so it must be right. It’s not a legitimate check.

And so what is happening, where people are speculating that the school was bombed because of an error from Claude, is exactly the kind of scenario that Dr. Heidi Khlaaf was talking about, and exactly the kind of scenario that Amodei was actually OK with.

FLORA LICHTMAN: So this moral high ground for Anthropic also feels a little suspect, right? Is that what we’re getting at? Yeah.

KAREN HAO: And the thing is, I think the way to think about Anthropic is that it’s the clean coal of the AI world. They fashion themselves as this ethical company that really cares about safety and the well-being of people and so on and so forth. But the entire way that they develop and deploy their technologies is deeply problematic and very imperial. And so the clean coal doesn’t exist. You cannot have clean coal.

FLORA LICHTMAN: This is such a basic question, but the news that I’ve been reading has been describing these as LLM-powered weapons, large language model-powered weapons. And it makes me think like, do I not understand what a large language model is?

KAREN HAO: Yeah. I mean, I personally would not use that phrase because it makes it sound like there’s a chatbot strapped onto a missile. And that’s not quite what’s happening here. I’m sort of piecing it together from what’s been reported by other publications. But what we understand at the high level is that the chatbots or the large language models are being used to analyze information, to identify the bomb targets. And then there is a missile that is launched to target those places.

And it’s not like one continuous sequence. It is people that then receive this list of identified targets and then do what they have always done, which is then launch the weapons. But yeah, it almost feels continuous because of what we were talking about. Because is that person really even actually adding their own judgment in?

FLORA LICHTMAN: And when we talk about autonomous weapons, that’s more like a brain. Because at that point, are we talking about a continuous sequence? Or even if it’s a string of tools chained together, does that amount to all the judgment being outsourced?

KAREN HAO: Yeah, so a fully autonomous weapon would be if Claude identifies the targets, and then, without anyone there, they’re automatically fed to the missile launching system and the missiles are launched. Or it could refer to drones with AI capabilities attached to them that identify the target themselves through a computer vision system and then drop bombs in that area. So basically, autonomous is defined in terms of the kill chain sequence, and specifically its last two stages: the deciding and the launching. If there is no human involved and it’s just the machine doing those, that is what’s considered a fully autonomous weapon.

FLORA LICHTMAN: And this is where Anthropic was like– Amodei was like, we’re not quite ready for that.

KAREN HAO: Yeah, exactly. It was, we’re not quite ready for those steps, but we would like to get there. And we are OK as long as there’s a person who’s watching while those steps are happening.

FLORA LICHTMAN: Right. Let’s look ahead. What will you be watching for?

KAREN HAO: I’m going to talk about what I’m watching for not with the companies, but with the public, because the thing that makes me optimistic in a deeply dire time is the amount of resistance that has started bubbling up. The thing I’m most excited to watch is that, in recent polls, 80% of Americans now believe that there needs to be some form of regulation on the AI industry. I don’t remember the last time that 80% of Americans were on the same side of one issue.

I’m very optimistic about the fact that there is now a broad coalition building to hold this industry accountable, because we need that more than ever. And we are already seeing this happening with some aspects of the AI industry, like the reckless data center expansion the industry has been engaged in, where so many communities across the US are discovering that a data center is popping up in their community under a deal their city council struck under NDA. And they are literally, physically going into the streets to protest these facilities.

They’re going to town halls to pressure their elected leaders. They’re voting out officials who are not adequately reflecting the will of the people in this situation. And this has become a very effective grassroots movement to check a key pillar of what I call the empire’s expansion. If these companies do not get their data centers at the clip that they need, they have to slow down their technology development. Data centers are already a key bottleneck in their advancement, and this would become an even greater throttle on it.

And I would love to see more people around the US, and also around the world, thinking about how to take the lessons from this grassroots movement pushing back on data centers and apply them to other aspects of the AI supply chain, whether it’s the reckless deployment in the military, or the psychological harm to kids, or the mass copyright infringement that is happening. And we are beginning to see more and more of that across the board.

FLORA LICHTMAN: Karen Hao is a journalist covering AI and also the co-host of the new BBC tech podcast, The Interface. Karen, thank you so much for taking the time.

KAREN HAO: Thank you so much for having me.

FLORA LICHTMAN: This episode was produced by Dee Peterschmidt. Thank you for listening. I’m Flora Lichtman.

[MUSIC PLAYING]


Meet the Producers and Host

About Flora Lichtman

Flora Lichtman is a host of Science Friday. In a previous life, she lived on a research ship where aperitivi were served on the top deck, hoisted there via pulley by the ship’s chef.

About Dee Peterschmidt

Dee Peterschmidt is Science Friday’s audio production manager, hosted the podcast Universe of Art, and composes music for Science Friday’s podcasts. Their D&D character is a clumsy bard named Chip Chap Chopman.
