The Pentagon is reportedly “close” to designating Anthropic a “supply-chain risk” after the AI company prohibited use of its Claude model for some military purposes. Such a designation would mean that anyone who wants to do business with the Pentagon must first cut ties with the AI maker. The dispute comes days after Anthropic closed a $30bn (£22bn) funding round at a $380bn valuation.
If the Pentagon stands firm, “Anthropic or like-minded companies could stand to lose out on capital and influence to peers who are willing to abide by the policy,” says Owen Daniels of the Center for Security and Emerging Technology.
In July 2025, Anthropic was awarded a $200m contract with the Department of War, which tasked it with developing “prototype frontier AI capabilities” for both enterprise and military domains. Crucially, Claude is the only frontier AI model with access to the military’s classified system.
The Wall Street Journal reported that Claude was used – via the tech company Palantir’s platform – during the US military operation that seized former Venezuelan president Nicolás Maduro. In the aftermath, a senior Anthropic employee asked a Palantir executive whether Claude had been involved in the capture. The Palantir executive relayed the exchange to the Pentagon, where officials bristled.
Anthropic’s contract with the Pentagon authorises its systems to be deployed for “all legal uses”, but the company wants caveats in line with its usage policy, which bars use “for criminal justice, censorship, surveillance, or prohibited law enforcement purposes”.
Daniels says Anthropic’s position is “especially challenging” because its big tech competitors, including OpenAI, Google and xAI, have agreed to let their systems be deployed for “all legal uses”. “There’s no industry consensus to fall back on here,” he adds.
Founded by former OpenAI employees in 2021, Anthropic has argued that powerful models require rigorous checks and limits on certain uses, even at the cost of commercial advantage. Today, the company still promises to “act for the global good”, and its flagship chatbot even has its own constitution.
The relationship between the White House and the company has been strained for some time. In 2024, Anthropic co-founder Dario Amodei called Trump a “feudal warlord” and endorsed Kamala Harris. The company’s policy and national security operations are dominated by former Biden administration officials, though it says it hires across party lines.
In October, David Sacks, Donald Trump’s AI and crypto czar, said Anthropic was “running a sophisticated regulatory capture strategy based on fear-mongering” and pursuing an “agenda to backdoor Woke AI”. Daniels says that framing “risks politicising the company’s legitimate concerns about AI safety and alignment”.