Friday, 14 November 2025

Amid a mental health crisis, building AI therapists may be good medicine

OpenAI reported a high incidence of ChatGPT users displaying signs of mental illness. If harnessed correctly, that could be to everyone’s benefit

A recent Wired headline read: “OpenAI says hundreds of thousands of ChatGPT users may show signs of manic or psychotic crisis every week.”

At first sight, the headline sounded like tabloid hyperbole. After all, according to OpenAI’s figures, only 0.07% of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania”, while only 0.15% “have conversations that include explicit indicators of potential suicidal planning or intent”. And then the scale registers: with 800 million weekly users, even 0.07% works out at roughly 560,000 people in crisis.

What prompted this disclosure? OpenAI had obviously been monitoring interactions between users and its bot, collecting data on warning signs such as extended chat sessions and perhaps also learning from the uproar after its abrupt (and temporary) withdrawal of GPT-4o, a model valued by users for the level of intimacy they felt it enabled.

But the company was clearly spooked by discovering the signs of mental ill-health among some of its users. It claims to have worked with more than 170 mental health experts to “help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support – reducing responses that fall short of our desired behaviour by 65-80%”.

What does this reveal? At one level, it’s just the latest manifestation of the Eliza effect: the tendency to project human traits – experience, semantic comprehension or empathy – on to responsive machines. More significantly, the fact that people are resorting to chatbot therapy reveals some uncomfortable truths about industrial society. One is the sheer scale of the mental health crisis that now confronts us. The other is the apparent inability of the healthcare system to give it the priority it deserves.

A wide-ranging UK inquiry into SMI (severe mental illness) published last week underscores the point. It shows that more than 500,000 of our fellow citizens are living with it. More alarmingly, the research suggests that SMI shortens life expectancy by between 15 and 20 years. In other words, it’s more lethal than diabetes, obesity and even smoking. Mental illness now rivals cancer as the biggest perceived health problem facing the UK but, according to the report, accounts for less than 10% of NHS spending.

Mental illness has always been part of the human condition. But current levels reflect more than better detection systems; they stem from the way societies have evolved. Numerous surveys show steady increases in reported mental health problems from the 1950s to today, especially in industrialised societies.

In recent times, several drivers of the SMI crisis stand out. The first is the way our world was transformed by neoliberal ideas that prioritised corporate interests, amplified inequality, marginalised trade unions, enshittified jobs and increased precarity. The second is the enforced isolation imposed on many vulnerable people by the Covid-19 lockdown, which caused a measurable global spike in mental health problems (especially anxiety and depression). And the third is technology – specifically, heavy social media use, which has been linked to increased anxiety and depression, particularly in adolescents and young adults.

Which brings us to our current predicament: millions of people suffering from mental illness, unable to access medical help or find a therapist they can afford and, understandably, turning to a chatbot that is endlessly patient and attentive – but not much good as a therapist. It would be possible to create a chatbot trained on curated academic and clinical sources that could perform as a more competent therapist than ChatGPT.

In fact, one of the more interesting outcomes of OpenAI’s discovery of signs of mental illness among its users was a set of modifications to the default version of GPT-5 based on clinical advice. The company claims to have “taught the model to better recognise distress, de-escalate conversations and guide people toward professional care when appropriate. We’ve also expanded access to crisis hotlines, rerouted sensitive conversations originating from other models to safer models, and added gentle reminders to take breaks during long sessions”.

It won’t be perfect, but it’s a start. And maybe it could inspire a good psychiatric hospital to build a really great large language model that would be truly therapeutic. Maudsley, I’m looking at you!

Contact mental health charity Mind on 0300 123 3393 or visit mind.org.uk

What I’m reading

Voluntary contribution

This Is Public Life in 2025, Who Would Volunteer? is a sobering Substack reflection on the BBC crisis and beyond by Martha Lane Fox.

Forward looking

An interesting report by Steve Newman from a gathering of the saner members of the tech elite in San Francisco is What I Saw Around the Curve.

Keeping track

AI Could Be the Railroad of the 21st Century. Brace Yourself is a nice long read on the AI bubble by Derek Thompson.
