The past few weeks have been good for AI doom-mongers. Mrinank Sharma, head of safety research at the AI pioneer Anthropic, which makes the Claude chatbot, quit the company, warning that “the world is in peril”. Zoë Hitzig, a researcher at OpenAI, makers of ChatGPT, also quit, fearing, as she wrote in the New York Times, the company’s “incentives to surveil, profile and manipulate its users”.
Anthropic’s own safety report from last year, claiming that Claude had resorted to blackmail to prevent itself from being unplugged, started doing the rounds again. A post by the tech entrepreneur Matt Shumer, entitled “Something big is happening”, suggested that an estimate by Anthropic CEO Dario Amodei that “AI will eliminate 50% of entry-level white-collar jobs within one to five years” might be “conservative”. The post went viral, with more than 84m views, and created a huge social media storm.
The arrival of new technologies – from aeroplanes to nuclear power to cloning – has often been accompanied by warnings of doom. What makes AI seem different is that many view it not merely as a technology but potentially as an agent, a silicon-based being that may act in ways disastrous to humans.
Such fears are at the heart of the furore over the capacity of AI to blackmail, a story that led to headlines such as “Anthropic Study Finds AI Model ‘Turned Evil’ After Hacking Its Own Training”.
The truth is more mundane. How any AI system behaves depends on how it is programmed and prompted and on what data it possesses. Anthropic set Claude a goal, then blocked all ethical ways of achieving it, leaving “blackmail” as the only option. As its own report acknowledged, “We deliberately… presented models with no other way to achieve their goals”. The researchers determined a particular outcome from the start, then acted surprised when the machine “chose” that outcome.
A critical paper from the UK AI Security Institute compared such research to early investigations into the linguistic capacities of chimpanzees, researchers in both cases imputing “beliefs and desires to non-human agents… when they act in ways that superficially resemble people”. It chided Anthropic researchers for having “conveniently encouraged the model to produce the unethical behaviour”.
The idea of AI as an existential menace to humanity is not simply overblown; it also hides the real threats AI poses, not in the future but in the present – the result of actions not of machines but of humans.
One is the question of jobs, though here, too, the dangers can be overstated. The coming of agentic AI – models that mimic human decision-making by working towards specific goals with minimal prompting – has led to a panic about imminent mass job losses. Fears that automation will replace humans have been ever-present since the industrial revolution. So, “Why Are There Still So Many Jobs?” asked one academic survey of the history of automation. The answer, it concluded, is that machines require humans to “complement” them and that automation helps create new kinds of jobs.
What the fearmongers often ignore is that, while AI can be hugely useful, it can also be alarmingly bad. AI “hallucinates” – invents facts – and has little understanding of social context.
The AI industry itself illustrates this. Anthropic’s Boris Cherny claimed last month that “Pretty much 100% of our code is written by Claude Code + Opus 4.5 [the company’s leading AI models]”. The trouble is, AI is quick, but the code it creates can be bloated and buggy. A survey last year showed that the time taken to correct AI-written code was almost as great as the time saved by getting AI to write it. Another study, in January, similarly found that most developers thought AI helped them code faster, but that 96% did not trust such code to be correct. Yet fewer than half of developers always checked AI-created code.
It is a similar story with AI-assisted research, where in many fields its increasing use is creating new problems. The use of AI by lawyers and judges to help build legal arguments has resulted in fake quotes and citations, and in rebukes from the bench.
None of this is to say that AI is not useful, even transformative, but the idea that it will simply “replace humans” is a dangerous fantasy. Employers will inevitably attempt to use AI to save costs, cut the labour force and subject workers to more degrading conditions. This is not an issue with AI but with an economic system that constantly debases workers.
A recent study found that AI “didn’t reduce work” but “intensified it”, forcing employees to work “at a faster pace” and for longer. The answer is not to fear AI but to ensure we defend workers’ rights and conditions against exploitative employers.
Perhaps the most pernicious use of AI is in the creation of a surveillance state. Liberal democracies, as much as authoritarian states, now exploit techniques from facial recognition to workplace surveillance, from governments tracking social media to “predictive policing”. What is being created is a global digital panopticon, a contemporary version of what Michel Foucault, half a century ago, called “permanent, exhaustive, omnipresent surveillance, capable of making all visible, as long as it could itself remain invisible”. This is what we need to confront: not fantasies about malicious AI but the reality of increasingly dystopian societies.
AI is a tool that is likely to be at least as transformative as the discovery of electricity or the creation of the silicon chip. But, as with all tools, how it is used, and for what purpose, depends on humans, not machines. The real danger is that we fail to recognise that.