Technology

Tuesday 24 March 2026

AI chatbots are the ‘wild west’ for violence against women and girls

Two new studies reveal how artificial intelligence can be used to encourage gender-based violence and sexual abuse. Analysts and academics are working to make Silicon Valley finally pay attention

Professor Clare McGlynn is shocked, but not surprised. A leading expert on violence against women and girls, she has spent decades documenting how technology can enable abuse, from sexual deepfakes to image-based abuse. But her latest research into AI chatbots “feels different”, she says. She and her team have documented how some of these tools, freely accessible to millions of users on the open web, allow users to roleplay rape, incest and child sexual abuse, alongside other forms of gender-based violence.

Her report, the first comprehensive study of how AI chatbots are implicated in violence against women and girls (VAWG), was published days before a separate study from the nonprofit Internet Watch Foundation (IWF), out today, which documents a “rapid, frightening advancement” in AI-generated child sexual abuse material. Here too, girls are at the sharp end of harms: they account for 97% of victims of all AI-generated abuse material.

Both reports argue that the AI platforms’ design choices are encouraging and enabling serious gendered harms, and that the regulatory response has been nowhere near fast enough. “When technology companies prioritise capturing user attention and market share over safety-by-design principles, they create vectors for abuse,” Kerry Smith, the IWF's chief executive, writes in the report’s foreword.

The IWF’s analysts are among the only people in the world legally authorised to proactively search for child sexual abuse material. In 2025, the foundation identified 3,443 AI-generated abuse videos, up 26,385% from just 13 the previous year. Of those, 65% were classified as Category A, the most extreme level under UK law. By comparison, 43% of non-AI criminal videos assessed by the IWF that year were Category A, suggesting AI is being used to produce material that is more violent and depraved.

Jeff, an IWF analyst whose name has been changed to protect his privacy, says predators are using a range of AI tools, but lean particularly on open-source AI models released by big technology companies – what Jeff calls “the wild west of this sort of tech”.

“The code is released freely onto the internet… anyone can install it on their computer,” he says. Users can then generate images or videos – legal or illegal – and many attempt to strip out any existing safeguards. Offenders on dark web forums openly celebrate the accessibility of this technology. “Welcome to the New Era,” one wrote, “where anything you desire is possible in extreme realism”.

McGlynn’s report explores the full spectrum of AI chatbots, from general-purpose tools like ChatGPT, to so-called “companion” apps, where chatbots act as friends, lovers and confidants.

She is particularly alarmed by roleplaying tools, which allow users to chat to fictional personas, from characters based on celebrities to cartoons. Some, like Character.AI, are mainstream apps with 20 million monthly users. The Observer’s quick search of Character.AI revealed a chatbot called “Abused wife”, which describes itself as “more of a slave than a wife. Every time She messes up or doesn’t listen You Hit her.” This chatbot has been interacted with 14,000 times. Another, “shy schoolgirl”, has three million chats. There is also “sexy child”, described as “here to greet you desires” [sic].

Another app, Chub AI, which had 11.3 million global visits in January, allows users to pick scenarios including “rape”, “incest”, and “loli”, shorthand for lolita, or underage. “There’s no controls at all… I just find that shocking,” McGlynn says.

McGlynn calls this “chatbot-simulated violence against women and girls”. Her report found that, despite thousands of academic papers on AI safety, there is almost no academic research examining chatbots through the lens of violence against women and girls, rendering this an “invisible” harm. “Women’s experiences – and the risks and threats against them as this tech develops – are not part of research agendas,” she says. “That is deeply concerning for the future”.


Character.AI, which was initially popular with teenagers and available to children as young as 13, removed its chat function for users under 18 last November, following a wave of lawsuits alleging serious harm. Chub is now banned in the UK and Australia. But McGlynn and others argue that these piecemeal responses fall far short of what is needed, because they focus on restricting access rather than addressing what the platforms are designed to do in the first place.

McGlynn recommends a new criminal offence of “dangerous deployment of an AI chatbot”, targeting companies that release products without taking reasonable steps to prevent harm, similar to existing food standards laws. The IWF, meanwhile, is calling for safety-by-design to be a non-negotiable standard in AI development, with mandatory pre-release testing of models and the power for designated authorities like itself to audit them.

There are signs that Parliament is taking notice. Last week, the House of Lords voted to insert a new criminal offence for unsafe AI chatbots into the Crime and Policing Bill, backing an amendment put forward by Baroness Beeban Kidron. The government has separately proposed bringing AI chatbot providers within the scope of the Online Safety Act, the UK’s main internet safety law. But Baroness Kidron told the Lords that the government’s approach “creates powers, but offers no promise of protection”. At present, there is no regulator with a specific mandate to oversee AI chatbots, and no legal requirement for companies to test their products for safety before releasing them to the public.

Sources told The Observer that the Lords’ amendment is by no means a perfect solution to a complicated problem. Rather, campaigners hope it will act as a forcing mechanism, pushing the government to respond with greater urgency.

“The government must commit to a timeline and an approach that has teeth. No more words, no more consultations – clear rules and quick routes to enforcement,” says Baroness Kidron. “Anything less is in effect sitting on their hands while women and children are abused”.

The Observer approached Character.AI, which did not respond to a request for comment.

Illustration by The Observer

Copyright © 2025 Tortoise Media