Technology

Tuesday 10 March 2026

AI is inventing academic articles – and scholars are citing them

From fake footnotes to phantom studies, AI-generated citations are slipping into real academic publishing. Scholars and publishers fear this ‘scholarly slop’ is polluting truth and science

Towards the end of last year, Ben Williamson was reading a promising paper submitted to his academic journal when he noticed his own name in the footnotes. It was attached to a study called Education, Governance and Datafication. It sounded plausible enough. The paper was about his area of expertise, digital education; the co-authors listed were people he had genuinely collaborated with; and it appeared to be published by one of academia’s most respected publishers.

But Williamson knew immediately that it was fake. As a journal editor, he was already familiar with a growing problem: AI chatbots that “hallucinate” references to papers that do not exist. What he discovered next, however, was stranger.

When Williamson searched for the phantom article online, he says that it had “taken on a life of its own”. On Google Scholar, the non-existent paper had been cited over 70 times in other journals – and even one book. The day we speak, he has just received a text message from a fellow academic: a student has cited the study in a paper about the use of AI in education – “an unfortunate twist at the end of this story,” he says.

There is a name for this phenomenon: scholarly slop, and it refers not just to AI-hallucinated citations, but to their ability to multiply. Each time a made-up paper slips through the cracks and ends up cited in a real journal, it is likely to be cited again and again, acquiring the appearance of legitimacy with each new reference.

The academic publishers Elsevier, Taylor & Francis and Springer Nature all confirmed to The Observer that scholarly slop is real – and growing “at scale”, according to an Elsevier spokesperson. None were able to share their data, but there are signs the problem is prevalent.

An analysis of NeurIPS 2025, a prestigious AI conference, found at least 100 hallucinated citations across 53 accepted papers. A study of three other academic conferences in 2024-25 identified nearly 300 papers containing fabricated references. And a late 2024 analysis of 1.5 million abstracts on the medical database PubMed found roughly 14% appeared to contain AI-generated material.

Scholarly slop, publishers and academics say, is putting pressure on an overburdened publishing ecosystem, with consequences that reach far beyond academia. “Somehow [I am] named within this torrent of material that poses real threats in terms of polluting our … knowledge systems,” Williamson says. “I do feel a deep sense of discomfort about that.”

*

False facts have circulated for centuries, and it would be unfair to blame AI alone for their proliferation. In 1754, the English antiquary William Stukeley wrote that the Great Wall of China “makes a considerable figure … and may be discerned at the Moon”. The claim, made almost 200 years before space flight, appears to be the origin of the myth that the Great Wall can be seen from space. Repopularised in the 19th century, the idea was repeated in travel writing and cartoons, and Neil Armstrong's admission in 1969 that he couldn’t see the wall from the lunar surface did little to stem its spread. For decades, Chinese textbooks continued to teach that the wall was visible from 400km above the Earth, some even falsely quoting Armstrong as having confirmed it.

Today, “the problem is not only that there's a huge amount of invention being done, but that a lot of it is done online,” says Anthony Grafton, a history professor who wrote The Footnote, about citations. The advent of the internet has made it easier than ever for invented information to reach new audiences, but harder than ever to verify.


In 2010, two Brazilian lawyers created a Wikipedia page for a made-up legal scholar called Carlos Bandeirense Mirandópolis, to prank an intern who had been relying too heavily on Wikipedia for his legal research. The page remained online for years, and its creators had all but forgotten about it when they heard that it had been cited in a 2014 judicial decision by the Rio de Janeiro Court of Justice, as well as a documentary and an undergraduate thesis. (“I can’t believe that all those people – even the judiciary – needed to have the same lesson as our intern,” Victor Nóbrega, one of the lawyers, says. “It was very entertaining for us.”)

False facts, often created by anonymous editors on Wikipedia, have made it into books, films, and newspapers, which are then cited back on Wikipedia itself, creating a false information loop known as citogenesis. Grafton says information generated online is harder to fact-check, because the internet doesn’t keep a permanent record.

AI chatbots have further loosened our relationship with the truth. Trained on vast quantities of online text, they are designed to produce fluent answers even when they do not know the facts, and, often, they do so by inventing plausible references. The technology also makes it easier to draft papers quickly, translate work into English and submit it all at scale.

NeurIPS, the conference, received more than 21,500 submissions in 2025, a 61% increase on the previous year. Many other conferences and journals have reported an increase in volume since 2023. Williamson’s own journal has seen submissions double to 1,200 in a year, and many of them contain false information. “We’re having to ask peer reviewers to be much more vigilant about checking citations and verifying references,” Williamson says. “It’s putting enormous pressure on editors and reviewers to maintain the integrity of journals – and of the scholarly knowledge environment more broadly.”

The pressure on journals reflects deeper problems in academia that long predate AI. Researchers operate in a “publish or perish” culture, Williamson says, often on short-term contracts and under pressure to produce papers quickly in order to secure funding. In that environment, an underground industry of so-called “paper mills” has already flourished, selling fabricated or low-quality research to academics looking to bulk out their publication records. AI is compounding the problem.

The publishers who spoke to The Observer said they are investing in fraud detection technology and in-house verification specialists to tackle scholarly slop.

“We are in an arms race”, says Van Rossum. He argues publishers, research institutions and technology companies should collaborate to protect the integrity of the scholarly record and build a new verification ecosystem.

For now though, “we are drowning in a sea of slop”, says Grafton. “The thing that worries me is that we're now in a world where everybody wants to invent history to say what they think it does.”

“The footnote, in the end, is the only weapon you have against that,” Grafton says.

Photograph by Yang Dong/VCG via Getty Images
