OpenAI has signed a deal with the Pentagon approving the use of its AI technology for classified military systems, just hours after Donald Trump ordered all US federal agencies to blacklist its main competitor, Anthropic, and in advance of US strikes on Iran.
For much of last week a debate over the ethics of using AI technologies for warfare raged online and in the Pentagon. Last Thursday the chief executive of Anthropic, Dario Amodei, published a blogpost outlining Anthropic’s red lines – the use of its models for domestic mass surveillance or fully autonomous weapons – amid pressure from Pentagon officials.
In response, the president branded Anthropic’s leaders as “leftwing nutjobs” and labelled the company a “supply-chain risk”, meaning that anyone who wants to do business with the Pentagon must cut ties with Anthropic. Amodei has promised to mount a legal challenge in response.

On Friday the CEO of OpenAI, Sam Altman, seemed to express solidarity with Amodei’s red lines, saying on CNBC he “mostly trusted” Anthropic. But hours later, on the eve of the US strikes, OpenAI signed its own deal with the Pentagon approving its technology for “all legal uses”.
In a post on X, Altman claimed: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoD [Department of Defense] agrees with these principles, reflects them in law and policy, and we put them into our agreement.” It is not clear how OpenAI’s deal differs from what Anthropic originally requested.
The decision is the culmination of an escalating and deeply personal battle about who controls the most powerful technology since electricity – and whether the companies that built it will have any say in how it’s deployed. “This is a critical moment and inflection point in how the [DoD] – and the world – is going to regulate AI for military uses,” says Ben Freeman of the Quincy Institute.
For Anthropic, the business implications could be profound, if not existential, stretching far beyond its $200m contract with the DoD. It stands to lose lucrative partnerships with Palantir, Nvidia, and other companies across the defence supply chain, although there is some debate about how the “supply-chain risk” label will be enforced in practice.
The Pentagon, meanwhile, will need to untangle deep dependencies on Anthropic’s military products at a moment of acute national security pressure and conflict in the Middle East (one official described it to Axios as “an enormous pain in the ass”).
Until last week, Anthropic’s Claude was the only frontier model operating in the military’s classified systems. Last Monday, the government confirmed that Elon Musk’s xAI had signed an agreement allowing the Pentagon to use its AI model, Grok, in those classified environments.
According to the Wall Street Journal, multiple federal agencies warned the White House of concerns about Grok’s reliability before the deal was signed, saying the model was sycophantic and susceptible to manipulation. Four days later, the OpenAI deal was signed. But integration with the Pentagon’s systems could take months because OpenAI’s tech is not currently available on Amazon’s cloud computing services. OpenAI also signed a $50bn partnership deal with Amazon on Friday. Trump said there would be a “six-month phase-out” for Anthropic’s products.
Pete Hegseth is sending a signal “that the US is willing to throw out all guardrails when it comes to AI”, said Freeman. “I think this is a really dangerous moment and, at the very least, this is going to set a precedent for the military use of AI going forward.”
Tensions are already spilling out across Silicon Valley, and are likely to intensify given the US military action. Last Thursday 433 employees at OpenAI and Google signed an open letter calling on their leaders to “stand together to continue to refuse the DoD’s current demands”. Military contracts have, in the past, caused trouble for Silicon Valley leaders. In 2018, an employee uprising forced Google to abandon Project Maven, a government drone surveillance programme.
At the recent India AI Impact Summit, in what was supposed to be a choreographed “show of unity”, Amodei and Altman appeared to decline to hold hands. Altman later posted he was “confused” during the awkward moment, but most onlookers interpreted it as poorly hidden animosity.
The feud between the AI founders goes back to 2021, when a group of former OpenAI staffers left the company to set up Anthropic, arguing the company was moving too fast, letting commercial pressures outrun its founding mission to build AI that “benefits all humanity”. Anthropic committed to acting “for the global good”, its flagship AI model governed by its own written constitution.
OpenAI, which remains the market leader for non-enterprise AI chatbots, started its life as a not-for-profit research outfit. The company recently restructured as a for-profit, as it hunts for new revenue streams to fund the costly training of its AI models, in what many see as a betrayal of its founding mission. Last month Altman, who in 2024 said he thought “ads plus AI” was “uniquely unsettling”, said it was testing a new advertising model for its ChatGPT product.
At the start of February, Anthropic took a swipe at OpenAI’s advertising plans with a Super Bowl campaign. One of the ads declared “Ads are coming to AI. But not to Claude” – widely interpreted as a dig at OpenAI and an attempt to frame Claude as the safer option.
Now, Anthropic finds itself in a similar bind. Last week, despite its public stand against the Pentagon, Anthropic amended its voluntary safety framework, removing a clause that promised to pause the training or deployment of capable AI models if it could not guarantee adequate safety measures were in place.
Sources close to Anthropic say the change was unrelated to the military talks, with the company’s chief scientist telling Time: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead.”
Industry sources say last week’s events highlight the need for a regulatory body that holds AI companies independently accountable. Without one, companies like Anthropic will continue to set their own terms, and redraw them when pushed. “Politicians do U-turns, not AI companies,” one says, adding that Anthropic has become embroiled in “modern statecraft” because of its sheer size and power.
Photographs by Ludovic Marin/AFP via Getty Images and Andrew Harnik/Pool/AFP via Getty Images



