Friday 20 March 2026

Claude’s won a battle with ChatGPT, but network effects will always win the war

While users are deserting OpenAI’s chatbot in their millions over its US military policy, X proves the lasting power of platforms, however toxic they might be

In November 2024, the AI maker Anthropic struck a deal with Palantir and Amazon Web Services to deploy its Claude chatbot on US classified military and intelligence networks. It seemed a big deal at the time: getting an AI model cleared for classified work at the second-highest security level was something no other AI company had accomplished. In July 2025, the US defence department awarded Anthropic a $200m contract to “prototype frontier AI capabilities that advance US national security”.

What nobody seemed to have noticed at the time was that Anthropic had stipulated that its technology could be used “for any lawful purpose” except for two things: mass surveillance of American citizens; and deployment of fully autonomous weapons without human oversight.

At some point, these two red lines really began to grate on military bureaucrats and triggered an explosion from Pete Hegseth, the slick-haired defence secretary, who initially fumed at the temerity of a mere corporation dictating what the mighty US military could or could not do before going fully apoplectic and declaring Anthropic to be a supply-chain risk, just like the Chinese tech giant Huawei and other outfits run by foreign adversaries.

Enter Sam Altman, chief executive of ChatGPT maker OpenAI and a guy so slippery that he gives grease a bad name. His spiel was that he was content to allow the Pentagon, which rebranded last September as the Department of War (DoW), to define what counts as lawful “mass surveillance” and “autonomous weapons”. OpenAI, he declared, “will build technical safeguards to ensure our models behave as they should, which the DoW also wanted”. And maybe it would also have access to all that juicy classified data.

Unsurprisingly, this spineless intervention – and the opportunistic cant that accompanied it – kicked up a bit of a storm. US uninstalls of the ChatGPT app suddenly soared. On the Apple App Store in the US, Anthropic’s Claude, which was languishing in 42nd place early last month, leaped to No 1, overtaking ChatGPT for the first time. By the final week of February, daily sign-ups to Claude were breaking records: free users were up by more than 60% and paid subscribers doubled.

An energetic campaigning group, QuitGPT, set up a website that claims to have more than 4 million supporters. “ChatGPT takes Trump’s killer robot deal,” it declares. “It’s time to quit.”

This furore has had some impact on Altman. On 3 March, for example, he admitted that his deal with the DoW looked “opportunistic and sloppy” and that he “shouldn’t have rushed” to get it out. He also said that OpenAI would amend the Pentagon contract to include explicit language barring domestic surveillance of US persons and nationals, and prohibiting use by the National Security Agency. In other words, the very things for which Anthropic has been punished. It’s hard to imagine Hegseth buying such “woke” propositions.

I hate to say this, but the idea that user boycotts may provide an effective way of disciplining tech corporations still smacks of magical thinking – the belief that symbolic gestures can change material outcomes. That’s partly because corporations are what the sci-fi writer Charlie Stross calls “slow AIs” – vast sociopathic machines driven by one overriding imperative: to maximise shareholder value. And they are really only moved by threats that may genuinely undermine their ability to achieve that goal.

Of course, a mass exodus of customers or users could provide that kind of shock. But tech firms operate on such a scale that it would take a colossal number of defections to have the desired effect. It’s very impressive that QuitGPT has more than 4 million supporters; but ChatGPT has something like 800 million weekly active users.

For most users of digital technology, the biggest disincentive to swap sides is always the power of network effects. Leaving a social platform or even an AI tool imposes costs on the leaver (lost connections, for example, or disrupted workflows), while the platform barely notices any individual departure.

If you want an example, just ponder the vast numbers of organisations (and apparently respectable people) that are still on X (née Twitter), despite the fact it has become a toxic sewer. And when you ask them why they’re still using it, the resulting resigned or embarrassed shrug vividly testifies to the power of network effects. The same thing applies to Instagram or TikTok. And to the hundreds of thousands of people who use ChatGPT as an emotional crutch, therapist, virtual companion or search engine.

For them, leaving the platform would be a wrench. As for me, since I never used it much, deleting it was easy. After all, other – less ethically challenged – AIs are still available.

What I’m reading

Ex machina

Margaret Atwood had a long conversation with an AI in Claude, You’re a Cutie-Pie!. You can guess how it went. Here’s the transcript.

War footing

How Not to Do Regime Change is Francis Fukuyama writing on the idiocy of the Iran war.

Farms and the man

A really striking essay on the economic effects of the Middle East conflict by Paul Podolsky is AI Is Today’s Tractor.

Photograph by Jabin Botsford/The Washington Post via Getty Images
