If 2023 was the year of “AI panic” – triggered by the arrival of ChatGPT in November 2022 – then 2026 looks like being the year of acceptance (whether reluctant or gleeful) of AI as the next general-purpose technology, akin to electricity or the internet. If everyday conversation is any guide, ChatGPT has become synonymous with AI, much as Google became synonymous with search in the late 1990s. So, for the time being, large language models (LLMs) such as GPT-5, which powers ChatGPT, are where it’s at.
Yeah, but what are these big models, really? Critics of AI describe them derisively as “stochastic parrots” because they build sentences simply by guessing which word (or token) is statistically most likely to come next. But such derision underestimates their usefulness, which is why the psychologist Alison Gopnik’s idea that LLMs are “cultural technologies” – tools, like books and libraries, for accessing accumulated human knowledge – is more insightful. They should be understood, she says, as innovative systems that enable humans to access and build on that knowledge in transformative ways.
Gopnik’s conception squares with how I see most people using LLMs: as tools for human augmentation, much as, in a networked age, search engines are prostheses for memory. Given that these tools seem destined to become as ubiquitous as Googling, isn’t it time we started thinking about protocols for using them effectively, ethically and honestly?
This thought is reinforced by an intriguing article published in Nature on the problem that using LLMs poses for academics. Suppose, say the authors, that you are a scholar who has come up with a few half-formed ideas for an article and you prompt Claude.ai with some rough notes. The LLM obligingly responds with a coherent draft that fills in the gaps in your thinking with ideas that another scholar, “Smith”, had published several years earlier in an article – of which you were unaware – that had been included in the LLM’s training data.
You feel convinced that Claude’s text represents a tidied-up version of “your” ideas. The end result, though, is that “your essay now inherits – via nebulous, machine-mediated means – a distinctive insight that Smith developed but for which she receives no credit”.
This illustrates what the authors call “the provenance problem” – “a systematic breakdown in the chain of scholarly acknowledgments that current ethical frameworks fail to address”. If you work in academia, that chain of citations can be critical, because in scholarly audits – such as the REF (research excellence framework) in the UK – it might determine whether you sink or swim in your career.
The fact that no single identifiable individual “stole” Smith’s work is immaterial: a wrong has nevertheless been done, so there’s no point in trying to name the guilty men, as it were. This is systemic – inherent in the way LLMs are built and operated. Beyond this casualness with intellectual debt, they sometimes “hallucinate” and make stuff up, are deliberately designed to be sycophantic to users and have internalised the biases implicit in the data on which they were trained – plus, at least some of those datasets are a product of copyright infringement.
These are problems that AI giants could solve or at least mitigate if they had any incentive to do so. Given that they don’t, the prudent course is to devise protocols for safe and intelligent use by ordinary users. Here are a few suggestions:
1. Remember that only the sceptical thrive in this business. Be especially cautious when LLMs generate substantive ideas rather than just polishing your existing arguments. And don’t be fooled by their engineered sycophancy and reluctance to disagree with you.
2. Treat all LLM output, however intriguing, as requiring verification. Become a pestilential scourge, continually asking: “What’s your evidence for that?” (Remember that they work for you, not the other way round.)
3. Find out (by experimenting) what they’re good at and use them for that. I find that they’re quite good for tedious web searches, summarising long documents and critically proofreading draft text.
4. Don’t get them to write stuff for you. Give them something you’ve written and ask them to rephrase or tighten it up. The smaller the gap between your input and the LLM’s output, the lower the risk.
5. Be open and transparent about your use of these tools. One of the interesting things about the Nature article I mentioned is the way the authors acknowledged that they had used GPT-5 and Claude Sonnet 4.5 to edit and shorten their text, after which they refined it by hand. This is good practice. And apropos that last point: I asked Claude Sonnet 4.5 to review this draft for clarity. Happy 2026!
What I’m reading
Eye-opener
Neal Stephenson’s A Medical Mystery from Postwar Germany is a great explanation by a great writer of an eyeball enigma – copper wire “whip cracks”.
Critical thinking
Zadie Smith and the Perils of Broad-Mindedness is a thoughtful review of the writer’s essays by Tomiwa Owolade on the Engelsberg Ideas website.



