Technology

Friday 6 March 2026

The Trump administration thinks Anthropic is a ‘supply chain risk’. It is still using its AI tool

Claude reportedly helped the US military bomb Iran

This article appeared as part of the Daily Sensemaker newsletter – one story a day to make sense of the world.

Last week the Trump administration declared Anthropic a “supply chain risk”, a designation that effectively bans companies with ties to the AI firm from working with the Pentagon. Since then, the American military has reportedly used Anthropic's Claude tool in its bombing of Iran. The US struck more than 1,000 targets in the first 24 hours of the conflict, using the Maven Smart System to identify targets and ascertain coordinates by sifting through reams of intelligence data. Claude is built into this technology and appears to have become integral to US military planning. The Washington Post reports that the tool also helped organise the sequence of the bombings. Anthropic, meanwhile, is trying to restart its relationship with the Pentagon. Analysts have speculated for years about how AI would be deployed in warfare. The messy future seems to have arrived.

