The difference between genius and stupidity, observed Albert Einstein, “is that genius has its limits”. Cue the vapourings about AI from some of Silicon Valley’s prodigies. Think of Sam Altman of OpenAI, for example, burbling that the technology will bring “unimaginable prosperity”, help to fix the climate crisis and even “figure out how to cure cancer”. And, of course, it will make work optional, as Elon Musk explained to then UK prime minister Rishi Sunak in a toe-curling interview at the AI safety summit held at Bletchley Park in 2023.
How do smart people come to talk such nonsense? It’s mostly because spectacular success in their chosen, limited domains makes them overconfident. After all, they’ve built huge companies and solved hard problems, and they assume that every problem is similar to the ones they’ve already solved. They live in a world where intelligence and information processing are what make things happen, and they assume that those are the key variables everywhere.
Alas, they’re not. The problem we have in addressing the climate crisis is not a shortage of ideas – or a lack of real (or artificial) intelligence – but that we live in a political system driven by five-year electoral cycles that make it impossible to take the necessary remedial measures in time. And the reason that AI will take quite a while to have a serious influence on productivity and economic growth is not that managers aren’t comfortable with Claude Code or Gemini, but that elaborate industrial processes have to be transformed to make full use of AI – and that takes ages.
Even in areas where AI has remarkable utility, physical and organisational realities have an awkward habit of intruding. Take drug discovery, a domain where the technology has made astonishing advances. Think, for example, of the way that DeepMind’s AlphaFold program cracked a 50-year-old problem in molecular biology – predicting how proteins will fold. This AI is, as one scientific journal puts it, now “reshaping the landscape of drug design”.
Since pharmaceutical research is basically about finding molecules that may have medical applications, and AIs such as AlphaFold can dramatically speed up that discovery process, this looks like an obvious win for the technology. At any rate, that’s how the tech industry sees it. In a recent interview, for example, Dario Amodei, the co-founder of Anthropic, predicted that AI will compress the clinical trials needed for drug authorisation to roughly one year.
This is, to put it politely, unlikely. First, as one critic pointed out, it conflates two separate variables: the success rate of a trial (will the drug work?) and the speed of a trial (how long does it take to run?). “AI models can help design more elegant molecules, in the same way an architect can use AI to design more efficient floor plans, but neither intervention guarantees an efficient use of institutional machinery to make that design in the real world.
“Even the most promising drug candidates must be tested in human bodies, which, in turn, need time to metabolise those drugs and develop side-effects. Patients must be recruited and followed over time, and regulators must be satisfied. None of this is easily accelerated with AI.”
Meanwhile, over in a different part of the intellectual forest, scientists are struggling to absorb the implications of another DeepMind success: the fact that machines now play chess and Go better than any human, and that their triumph was accelerated by removing humans from the problem-solving: “the bitter lesson” that computer scientist Richard Sutton outlined in his 2019 essay of the same title.
Seventy years of research in AI, Sutton concluded, suggest that systems relying on pure computation have outperformed those designed to reflect human intuition. Or, to put it crudely: “AI scientists will likely do better in the future if humans are not involved in their operation.”
Which leaves us with an intriguing tension. On the one hand, it’s clear that AI boosters radically underestimate physical and organisational realities: the fact that drug trials require careful recruitment and monitoring of human subjects, that bodies take time to metabolise drugs, that electoral cycles make long-term planning impossible, and so on.
All of which is in effect a humanist argument that the world is messier than engineers think – or can imagine. But then we have Sutton suggesting that removing human intuition from AI systems actually improves them. AI cannot dissolve the friction of the physical world – but it may dissolve something we valued more, which is our own indispensability to the process of discovery itself.
What I’m reading
Claude of war
AI vs the Pentagon is a terrific blogpost by Jasmine Sun on the row between Donald Trump’s gang and Anthropic over militarising its Claude AI.
Munich disagreement
Arthur Goldhammer’s Inventing Tradition is a delicious demolition of Marco Rubio’s speech at the recent Munich security conference.
Crouching tiger
The Ideological Implications of China’s Economic Success is an insightful analysis by the economist Branko Milanovic.