Last week, the sedate world of book reviewing was rocked by a startling revelation. The New York Times announced that the British literary critic Alex Preston had admitted to using AI that plagiarised another writer’s review of a novel to complete his own critique of the same book.
The infraction by one freelance reviewer may seem inconsequential in the grand scheme of global affairs, but it does speak to a wider and more troubling issue that is shaking the foundations of the media and creative industries: the proliferation of AI and uncertainty about how it should be used.
Of particular concern are large language models (LLMs), a form of AI that has been trained on colossal data sets to understand, summarise and generate human language.
In January, Preston asked an LLM tool to enlarge a draft of a review he had written of Jean-Baptiste Andrea’s Watching Over Her. According to his account, he did not realise that, in the process, the program took a few lines almost verbatim from Christobel Kent’s Guardian review of the book, first published in August.
When an alert reader drew the New York Times’s attention to the striking similarities, Preston confessed to his use of AI, and the newspaper ended its relationship with him for violating its standards.
Online responses among his fellow reviewers ranged from surprise (Preston is a respected critic who has also written for The Observer, the Guardian, the Telegraph, the New Statesman and GQ) through schadenfreude (book reviewing is filled with many talented but poorly paid writers) to sympathy (there but for the grace of God...). The conviction that LLMs are “plagiarism machines” was widely expressed.
The critic immediately acknowledged that he was guilty of a “complete failure of judgment” and apologised to all concerned parties, but he also explained that he had clumsily used a tool he “didn’t understand”.
Some will suggest that new technology confusion is a handy cover for old-fashioned literary plagiarism, but Angus Finney, a fellow at the Cambridge Judge Business School, believes that stories such as Preston’s will become increasingly common in the AI era.
Part of the problem is that different media have different needs and guidelines. Some publications with a high turnover of content are keen for reporters to employ AI in the writing process to increase efficiency. Others that place a premium on the voice and authority of the writer abhor any trace of artificial input.
In Britain, the Society of Authors has just launched a “human authored” scheme that members can sign up to in order to attest to the genuineness of their work.
Finney believes that there is great ignorance of generative AI’s many pitfalls among its users. He has written a book on the subject, Lord of the Lies: How to Think, Learn and Thrive in the Age of AI, which will be published later this year.
“The mimicry and speed and fluency of these LLMs is very seductive,” he said. “They seem like they are offering you the world at your fingertips, but if you don’t double- and triple-check, they’ll bite your fingers off.”
If nothing else, rigorous factchecking might at least have disguised otherwise egregious uses of AI. Last month saw a series of publishing controversies that pointed to the dramatic incursions AI has made into the writing process.
The former academic and defeated Reform UK candidate Matt Goodwin was branded “MattGPT” after he was accused of using nonexistent quotes and dubious statistics in his new self-published book Suicide of a Nation: Immigration, Islam, Identity.
Finney talks about “hallucinated facts”: the strange tendency AI has to produce false information. “Basically, they make things up because these LLMs are programmed to give answers. They have to finish the task. If they can’t see a route that’s empirical and factual, they will immediately go to a plausible answer rather than a correct or verified answer.”
About 10% to 20% of search engine work used in the media and creative industries, he said, “is effectively going to be wrong if you’re depending on an LLM”.
Peter Vandermeersch, the former chief executive of Mediahuis Ireland, publisher of the Irish Independent, was suspended in March by the European group, where he held the august title of fellow of “journalism and society”, after a Dutch newspaper accused him of using AI to produce “dozens” of false quotes. He admitted his guilt and eight of his articles were removed from the Independent.ie website.
Also last month, the Hachette Book Group removed Shy Girl, a horror novel, from its forthcoming American roster after allegations that its author, Mia Ballard, had relied largely on AI to help her write the book, an accusation she denies. In addition, the novel was discontinued in the UK, where it was already on sale.
LLMs require a vast amount of written material to produce rapidly improving human-like sentences: this written material has been swallowed up by the companies generating AI text without any acknowledgment of copyright. So writers are both the unwitting victims and corner-cutting beneficiaries – if that is the right word – of these advances.
Last Monday was the final deadline for authors to make claims against the US tech company Anthropic for copyright infringement compensation.
Last September the company agreed a $1.5bn settlement with authors in relation to an estimated 500,000 books it had pirated to train its Claude AI model. It sounds like a huge sum, until you realise that Anthropic was valued at $380bn after a recent funding round, and authors will receive compensation of about $3,000 a book.
Against that industrial-sized intellectual larceny, the kinds of petty plagiarisms and false attributions that understandably upset fellow writers, not to mention readers, are minor misdemeanours. Yet it was that infringement of writers’ copyright in the first place that enabled the later transgressions.
It is the now well-established modern entrepreneurial principle that it is better to ask forgiveness than permission, even if the kind of embarrassed contrition shown by Preston is no more a trait of big tech than it is of the bombastic Goodwin.
Finney believes that, for all the momentum of the LLMs, there remains a “technical glass ceiling” that stops short of perfect human imitation. No matter how refined their pattern recognition becomes, they still produce generic material, bereft of the individuality that is the hallmark of true creativity.
Where AI is already having its most profound and potentially damaging effect is on the writers and creators of tomorrow, who are coming through an education system bedevilled by the bad habits that LLMs have engendered.
“The problem starts at schools and universities,” said Finney. He argues that teachers and tutors themselves have become too dependent on AI to set work and tests, and this in turn encourages students to seek short cuts. The temptation to use an LLM to answer an essay question, and then afterwards check for mistakes, is one that many procrastinating students find hard to resist as deadlines approach; a temptation, of course, also faced by Preston.
What is lost in that action, according to Finney, is a knowledge base and creative thinking. Also abandoned is the opportunity to produce distinctive prose rather than “the flattened mediocre language” in which AI trades.
To add insult to injury, AI-detection tools that are themselves AI-based often misdiagnose the use of AI, which, according to the US publication the Chronicle of Higher Education, has led students to use AI to check for AI flags in their own work. “Universities and schools and education are going to hit a huge problem in terms of real knowledge, real critical analysis, real smarts,” said Finney.
Perhaps the answer is to recognise that AI is a tool, not a solution. Its instantaneous convenience and companionable tone are designed to encourage dependency.
Plagiarism is a furtive act that hides in plain sight. There is an old line that says: “Good artists copy, great artists steal.” But although the words are often attributed to Picasso, there is no evidence he said them. Even the clever take on plagiarism was borrowed from somewhere else.
The key thing about artistic appropriation, whether it is a homage or a heist, is that it is conscious. With AI, that is often far from clear. And if a seasoned critic can risk his reputation by employing its pre-owned phrases, then few of us can feel entirely safe in its treacherous embrace.
Photograph by David Levenson/Getty Images