Saturday 11 April 2026

In polarised times, AI may be the centrist the world needs

Studies have shown chatbots nudge people from extreme rightwing views to the middle ground

Pierre Bourdieu, the great French sociologist, used to talk a lot about “social capital”, by which he meant the resources you can access because of who you know and who recognises you. Readers of Jane Austen will be intimately familiar with the idea, though she might have been surprised by the way 21st-century people would be harnessing technology to increase their social capital.

In a networked world, one obvious way to do that is provided by social media. You can become an influencer on Instagram with 20,000 followers, for example (though it’d be best to avoid Dubai for the time being). Likewise, if you’re paranoid about migrants, you can get an account on X and build a following of 5,000 like-minded folks.

In either case, Instagram or X will be happy to help. Their business, after all, is to make money by capturing people’s attention and monetising it. To that end, they’re powered by algorithms that reward and disseminate sensational or inflammatory content, with little regard for truth, because that increases what the platforms prize most: engagement. Not surprisingly, social media attracts “malignant, status-seeking people who use hostility to get attention and power” – and also to increase their social capital. In that sense, spreading hate and divisiveness on social media has, weirdly, become a form of entrepreneurship.

Of course, technology is not the only reason for the increasing polarisation of democracies. The deeper problem is that these societies have for several decades failed to deliver the shared prosperity on which social cohesion depends. As programmers say: inequality is a feature, not a bug, of the neoliberal state. The system has been good for some, but countless others have been “left behind”.

Social media gives them a voice that is then amplified by tabloid media. So the pool of people producing and broadcasting information has dramatically expanded – as has the range of views and narratives to which people are exposed. The result is a perfect storm of misinformation.

In 2025, John Burn-Murdoch, the Financial Times’s data wizard, did an analysis that showed extreme political views and narratives are overrepresented on social media compared with traditional media and cable TV. “Whereas traditional media catered to a range of views,” he reported, “with moderate positions well represented, extreme views – of both left and right – are heavily overrepresented on social media.”

Last month, he did a fascinating experiment that built on the earlier study. Using the 2025 dataset of tens of thousands of responses to questions on policy preferences and sociopolitical beliefs, he investigated whether and how the most widely used AI chatbots shape conversations about politics and society.

The results, he writes, “strongly support the theory of AI chatbots as depolarising and technocratising”. Social media elevates fringe views, while AI “nudges people towards the centre”. All the AI platforms Burn-Murdoch tested nudge people away from the most extreme positions and towards more moderate and expert-aligned stances.

Grok guided conversations about policy and society towards the centre-right – a rightward push for most people but a nudge towards the centre for those who started out as conservative hardliners. OpenAI’s ChatGPT, Google’s Gemini and the Chinese model DeepSeek all exerted “similarly sized nudges towards a centre-left worldview”. And even when the bots knew a user’s political leanings, they still directed hardline partisans on both flanks away from extreme beliefs.

The usual caveats apply: this is just one study, it hasn’t been peer-reviewed, and so on. Nevertheless, two things stand out from it.

The first is that, while it’s encouraging – given how many people have been alarmed about the persuasive power of large language models (LLMs) for political indoctrination – it’s not entirely surprising. After all, critics of LLMs from the humanities have from the outset scorned the machines for always coming out with “average” responses to literary prompts. Or, as US author Ted Chiang put it, they produce only a “blurry jpeg” of the web.

But second, if we are considering LLMs as a nonpartisan source of information in ideologically contested areas, we need to also take into account the motivations of the corporations that own and operate them, and the datasets on which they have been trained. I’m sure, for example, that the Maga crowd have already trained an AI model on the text of Project 2025, the detailed plan for Donald Trump’s coup d’etat.

So two perennial rules still apply: Gigo (garbage in, garbage out); and caveat lector – reader, always, beware.

What I’m reading

The men who stare at scapegoats: AI Got the Blame for the Iran School Bombing. The Truth Is Far More Worrying is a good long read by Kevin T Baker in the Guardian about how “LLMs gone rogue” stories are misleading and hide the real human culprits.

Spoiled ballot: Historian Timothy Snyder’s sobering essay The Next Coup Attempt looks at the plans to hijack the midterm US elections.

Next big thing: The Beginning of Programming As We’ll Know It is a nice account of “vibe coding” by a real programmer.
