The UK is following what has been called a “whack-a-mole” approach to big tech regulation.
Ministers responded last week to the backlash over Elon Musk’s AI chatbot Grok by fast-tracking legislation to ban the creation of non-consensual intimate images. While there is broad support for strengthening the law, experts warn the changes may not go far enough to stem harms from generative AI chatbots.
“It looks like we are behind the curve, because we are,” says Harriet Harman, a former deputy Labour leader. “And it looks like we’re running to catch up, because we are. And it looks like we’ve got a scattergun approach, because we have.”
This includes, she said, a failure to clarify what the law should define as “intimate” imagery. Although in the US it is often defined as depicting nudity or underwear, backbenchers and ministers argue the row over Grok being used to create non-consensual images of women and children in bikinis and wet T-shirts exposes a significant weakness in this approach.
Liz Kendall, the technology secretary, has already acknowledged that the law, aimed at “nudification” apps, may not apply to Grok at all.
“The nudification ‘ban’ is definitely not a solution to Grok as it won’t apply to Grok,” says Clare McGlynn, a professor of law at Durham University. The offence is expected to apply only to apps designed specifically to create non-consensual intimate imagery. Grok, by contrast, is a general-purpose AI model capable of producing images, text and code, and would likely sit outside the scope of the law.
Writing to the Labour MP Chi Onwurah, Kendall said Grok might not be covered by the proposals. “We have identified that not all chatbots are covered and I have commissioned officials to look at how this gap can be addressed,” she wrote.
On Wednesday, X said it would “geoblock” users’ ability “to generate images of real people in bikinis, underwear, and similar attire” in areas where this was illegal. It is unclear whether similar images could still be generated using Grok’s standalone app or website. xAI, Grok’s parent company, did not respond to a request for comment.
The debate is unfolding against a backdrop of rising concern about technology-facilitated violence against women and girls (VAWG). Around one in 10 recorded VAWG offences already has a digital element, a figure experts believe significantly underestimates the true scale, and younger people are at higher risk as they spend more time online. Artificial intelligence, campaigners warn, can be a “harm accelerant”, allowing abuse to be generated and disseminated at scale.
Experts warn that other AI-chatbot controversies are likely to emerge in the near future. Michael Birtwistle, associate director at the Ada Lovelace Institute, an AI research body, says future flashpoints could include children receiving sexual interactions from chatbots, or AI assistants dispensing questionable health or financial advice.
“The solution to that, in our view, is placing requirements on those building and hosting the models themselves,” he says. That approach is “geopolitically very difficult”, he adds, particularly given resistance from the US. Musk has previously characterised the UK government as fascist. “Otherwise, you’re playing whack-a-mole with the symptoms rather than the root cause.”
At the heart of the debate is Ofcom, the regulator enforcing the Online Safety Act, which places safety duties on services such as social media and search. The developers of general-purpose AI models are not necessarily part of its remit, even though their tools can be plugged into many products at once. One technologist told The Observer that this exposes the need for a new layer of AI governance, potentially requiring a dedicated regulator able to impose safety obligations directly on model developers.
Harman says parliament needs structural reform to keep pace. She proposes that, at the start of each parliament, MPs establish a permanent, well-resourced, cross-party committee with the power to legislate on technology issues. Unlike temporary bill committees, its members would remain in post long enough to build deep technical expertise.
“You can’t ever expect government to be ahead of the curve,” says Harman. “But it just needs to be not so far behind the curve. I think parliament can guard the public interest when it comes to tech in a sensible way, but to do so it has to change its processes.”
Senior figures across Labour and the government say the furore over Grok has hardened support for bold action on children’s online safety, including a possible ban on social media use for younger teens. While critics point to evidence from Australia that children can circumvent such bans, supporters argue the aim is to send a clear signal that platforms like X are not safe for children, and to alert parents that they are not regulated in the same way as cinemas or video games.
MPs are also pushing for any restrictions to include YouTube, citing its algorithmic design and high levels of use, as well as messaging services such as WhatsApp and Snapchat. This goes further than the current under-16s ban in Australia. Ministers increasingly accept, insiders say, that the choice may be between attempting to regulate the entirety of social media, which is seen as impractical, or imposing sharper safeguards around access.
Similar problems exist in policing, says Giles Herdale, an independent expert on policing tech-enabled VAWG. “Although the internet has been around for over 30 years, this is still seen as an emerging issue by policing rather than a core business,” he says, warning that enforcing nudification laws could prove difficult in practice because law enforcement remains “behind the curve”. Frontline officers often lack the digital skills needed to investigate such offences, he says, while violence against women and girls is systematically deprioritised in favour of “high-status” areas such as counter-terrorism.
He argues for mandatory data-sharing from big tech platforms and specialist teams akin to those tackling child sex abuse material.
Deputy Assistant Commissioner Helen Millichap, director of the National Centre of Violence Against Women and Girls and Public Protection, said: “Policing is committed to reducing the prevalence of violence against women and girls, in all its forms, and welcomes any measures that seek to strengthen a wider-system response that keeps people safe.
“However, we cannot prevent the scale of this offending through policing activity alone. Measures that eliminate the risks of the harmful use of technology at source must be prioritised and we will continue to work with others with that collective intent.”
Photograph by Chesnot/Getty Images