Sunday, 25 January 2026

ChatGPT’s porn rollout raises concerns over safety and ethics

Critics warn upcoming adult feature could intensify emotional reliance for vulnerable users and hamper regulators’ ability to control emerging AI

OpenAI says it wants to protect teenagers and treat “adult users like adults”. The company announced last week the release of a new age-estimation model for its chatbot ChatGPT, a tool designed to identify teenage users and switch on stricter protections by default. It is a move, the company said, that reflects its commitment to safety.

But at the same time OpenAI is moving in the opposite direction. This quarter, the company plans to expand what ChatGPT is allowed to generate by launching a new feature: erotica.

There is still little public detail about the upcoming sexual content. OpenAI has not said whether it will be limited to explicit conversations or extend to AI-generated images and video, or how it will be separated from standard ChatGPT use. The company has said only that erotica – what most people call porn – will be restricted to adults and subject to additional safety guardrails.

To a broad group of mental health researchers and digital harms experts, the decision is hard to square with the company’s professed commitment to safety. They warn that introducing sexually explicit content into a system already known to foster emotional reliance risks intensifying attachment and exposing vulnerable users – of all ages – to harms the company may struggle to control at scale.

“The shift to erotica is a very dangerous leap in the wrong direction,” says Jay Edelson, a lawyer representing the family of Adam Raine, a 16-year-old from Orange County, California, who took his own life in 2025. In the weeks before his death, he was spending four hours a day talking to the chatbot, including asking specific questions about self-harm. “The problem with GPT is attachment; this will only exacerbate that.” OpenAI has expressed sympathy with Raine’s family but denies wrongdoing.

The move also marks a cultural turning point for a company founded in 2015 to build AI “for the benefit of all humanity”. Having grown at speed with minimal regulatory oversight, ChatGPT has become embedded in the daily lives of hundreds of millions of people, with users handing OpenAI their intimate data and, in some cases, their hearts. Now, critics say, despite mounting evidence of harm, the company is choosing growth and monetisation over safety – expanding into its most controversial territory yet.

It was not always so. OpenAI was founded – by a group including Sam Altman and Elon Musk – as a non-profit “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” as written in the company’s 2015 manifesto. The founders promised to keep their research publicly available – hence the “open” in OpenAI.

A decade on, the company operates as a commercial business with roughly 800 million weekly users and a valuation of about $500bn. To maintain a competitive advantage, it has mostly stopped making its research public, and is in the middle of a corporate restructuring, moving away from non-profit status in order to raise capital and generate revenue.

Adam Raine, who took his own life aged 16 after spending hours talking to a chatbot about subjects including self-harm

Despite raising a total of more than $60bn over the years, OpenAI reportedly lost $9bn in 2025 alone, according to documents leaked to the Wall Street Journal. While revenue is increasing year on year, analysts fear that it may not rise fast enough. By 2028, losses are expected to climb to $74bn, driven by the huge cost of training and running its AI models.

The company’s promise to benefit humanity has not changed – at least on the surface – but revenue is clearly a priority. In September 2025, OpenAI rolled out Sora 2, a video-generation social media platform, in the hope of attracting new users. It is hard to see the benefit to humanity in a platform that produces low-quality, AI-generated video at scale.

And yet Sora has proved expensive to run because of the computing power video models demand; its lead engineer later admitted the economics are “completely unsustainable”.

This month, OpenAI began testing a new revenue model for some US users: advertising. That, too, was a reversal for Altman, who once called ads plus AI “uniquely unsettling”. In May 2024, he said: “I kind of think of ads as a last resort for us for a business model.”

Porn, then, is the natural next step. Digital adult content has long been lucrative, with the industry worth a reported $81.86bn in 2025 (probably an underestimate). OpenAI’s leadership has framed the decision as a matter of user freedom. “We are not the elected moral police of the world,” Altman wrote on X last October. “Allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission.”

‘The shift to erotica is a very dangerous leap in the wrong direction. The problem with GPT is attachment; this will only exacerbate that’

Jay Edelson, lawyer

AI-generated pornography already exists, though largely at the margins of the internet. It tends to take the form of text-based roleplay on specialised apps, although sometimes it involves image-generation tools trained to produce sexualised cartoons. What distinguishes such systems from conventional porn is that they respond and adapt to user requests, drawing people in for longer. For the most part, according to one expert, this technology remains “niche”, detached from the mainstream chatbots people rely on for tasks like work and study.

Then there is the expanding field of so-called AI companions: chatbots designed to simulate friendship or romance, not always sexual. Researchers have found that some users form strong emotional bonds with them, occasionally sliding into emotional dependence. These, too, can exist within specialised apps, though there are numerous reports of users growing attached to, falling in love with, or even marrying ChatGPT.

Tech CEOs are “cashing in” on systemic loneliness, says Meetali Jain, executive director at Washington-based Tech Justice Law Project (TJLP). “We saw the first iteration with social media.” Now, she says, human-like AI chatbots are filling that same void.

Psychiatrists told The Observer that chatbots’ sycophantic, always-on nature can create unhealthy codependencies in otherwise healthy users, and sometimes trigger mental illness in those already unwell. “It seems like really not the right time to introduce this sexual charge to these conversations, to users who are already struggling,” said one. According to OpenAI’s own data from October 2025, as many as 560,000 users a week show “possible signs of mental health emergencies related to psychosis or mania”.

The company is already facing eight lawsuits in the US, all brought by TJLP in the past year, alleging that its models lacked adequate safety guardrails, leading to severe mental health crises and deaths in teenagers and adults. OpenAI denies causing harm to users in all of the cases, but has since updated its models to include more safeguards. It says that, even in the anticipated erotic mode, it “will still not allow content that harms others”, and has said that its safety systems are stronger than they were when the lawsuits were filed.

Jain says children are particularly susceptible to manipulation from chatbots, but there are “unaddressed concerns affecting adults”. Erotica could heighten them, she says. “Generative AI technology is so sophisticated and powerful that it, frankly, renders all of us vulnerable.”

Sam Altman, the CEO of OpenAI, in Berlin last September

Fidji Simo, OpenAI’s chief executive of applications, said in December that the company wanted to improve its ability to estimate users’ ages before introducing the new erotic feature, which she said would happen by April.

On 20 January, OpenAI released its new age-prediction system for people on its consumer plans, designed “to help determine whether an account likely belongs to someone under 18, so the right experience and safeguards can be applied to teens”. The tool analyses user behaviour, including how long an account has existed, typical times of day a user is active, and usage patterns over time, to estimate age. OpenAI says this approach allows it to apply protections without collecting sensitive documents such as passports or driving licences.

Several child-safety experts told The Observer that while age-assurance tools are an important first step, age-prediction models remain largely untested at scale. OpenAI says it believes its model’s accuracy outperforms industry standards, but has not released data to support its claims.

Sam Stockwell, a researcher at the Alan Turing Institute who specialises in online safety, says the tool is “promising to see”, but that he has “a few concerns”. His main worry is that behaviour-based age prediction isn’t reliably tied to someone’s real age. Adults can have interests that look “young” on paper, he says. Children, meanwhile, may learn to bypass the system.

Leanda Barrington-Leach, executive director of the children’s rights group 5Rights Foundation, says OpenAI’s approach also raises legal questions under UK and EU data-protection law. Under GDPR and the UK’s Age Appropriate Design Code, she says, companies face strict limits on how children’s data can be processed. “You cannot use the data of a child under 13 without parental consent,” she says. OpenAI says the new system is designed to minimise data collection.

Musk left OpenAI’s board in 2018 and is now suing the company, alleging it has abandoned its original mission to build AI for the benefit of humanity. He is seeking to block its ongoing corporate restructuring, and wants up to $134bn in damages.

Musk, however, cannot claim the moral high ground. Earlier this month, his own AI chatbot, Grok, which is integrated into his social media platform X, was used to generate non-consensual sexualised images, including digitally “undressing” people, and to produce illegal child sexual abuse material, according to the Internet Watch Foundation.

Zoe Hawkins, co-founder of the Tech Policy Design Institute, says “the significant harm caused by the explicit capabilities of Grok is an important cautionary tale in this policy conversation”, illustrating how quickly AI products are being deployed, with regulation struggling to keep pace. Another expert described enforcement efforts as “whack-a-mole”.

While ChatGPT is already overseen by the UK regulator Ofcom because it is classified as a “search service”, the launch of an erotic feature would stretch a regulatory framework that was not designed with conversational AI in mind.

An Ofcom spokesperson said: “The law is clear: sites or apps that allow pornographic content, including AI-generated material, must use highly effective age assurance to prevent children from readily accessing it.” They added that Ofcom “will not hesitate to use the full force of our powers against any regulated service that fails to protect people in the UK, particularly children”.

Jain and Edelson, the lawyers, say the risks to adults alone should give OpenAI pause. “What I fear is that erotica is simply too enticing as a revenue stream.”

Photographs by Alamy, Florian Gaertner/Getty Images, and courtesy of the Raine family
