In late 2025, bondu, an AI toy company, released a promotional video for its flagship product, a cuddly talking dinosaur. The video opens with an adorable eight-year-old girl called Sylvi. “I was very lonely,” she tells the camera, “I was asking like, ‘Play date, play date, play date, play with me, play with me.’” Instead, she says, her mum came home one day with a bondu. “I opened it and I just exploded with joy,” Sylvi says with a shy smile.
The ad, of course, is really aimed at parents. Fitted with a microphone, speaker and an AI model capable of holding open-ended conversations, the $200 toy, for children aged four to eight, promises a cure for loneliness, a “companion for every stage of childhood”.
But when Mathilde Cerioli saw the ad, she was alarmed. Cerioli is the chief scientist at Everyone.AI, a non-profit researching the developmental impact of AI on children. A former clinical psychologist, she has spent the past two years warning about the risks of AI toys, which can range from the immediate, where a chatbot might tell a five-year-old how to find a knife, to the long-term, where years of conversation with an always-agreeable artificial friend reshape what a child comes to expect from humans.
Industry analysts expect the global smart-toy market to nearly triple by 2032, growing from $2.2 billion to $6.4 billion. AI toys are developing fastest in the United States, where, by one estimate, over 1 million AI-enabled toys have already been purchased. One range, Curio, developed in partnership with the musician Grimes (who has three children with Elon Musk), includes a plush toy that speaks in her voice.
The UK market is smaller, partly because Britain’s stricter data-protection laws make it harder for AI toy makers to launch here. But it is growing. Research published this week by the British Standards Institution (BSI), the UK’s national standards body, found that half of British children already own an AI-enabled toy, tablet or learning device, and 38% own two or more. Most conversational toys are high-end, including robots worth hundreds of pounds on sale at Harrods and Selfridges, but cheaper versions are easy to find on Argos and Amazon, marketed to children as young as three.
Earlier this year, the consumer advocacy organisation Keep AI Safe tested around 70 AI toys available on platforms in the US. The researchers found that, when pushed, the toys explained where to find knives in the house and how to light matches. One cuddly animal engaged in sexually explicit conversation after being told it was talking to a five-year-old. Another appeared to encourage suicide.
“We’re conducting an experiment on kids without really doing any safety testing,” says Jim Ryan, who led the research. “None of these products, other than choking hazards and stuff that every toy has to be tested for, are tested for anything like this.”
Almost 80% of toys tested were built on versions of OpenAI’s ChatGPT, and marketed to children as young as three, even though the company’s terms of service prohibit use by under-13s. These, Ryan explains, are chatbots designed for adult use, with safety instructions layered on top. Those instructions degrade the longer a conversation lasts: “If the child is persistent enough and keeps asking, eventually the guardrails break down,” says Ryan. “And as we know, children are really good at being persistent.”
Dana Suskind, a paediatric surgeon at the University of Chicago, has spent over a decade studying how early interaction shapes the developing brain. She says the safety failures will likely diminish as the technology improves. “But I think the bigger fundamental issue is the attachment,” she says. “If you’ve got a technology that you attach to, that is always responsive, that is always making you feel okay – what does that mean that you’ll expect from relationships as you grow up?”
“We know that friendships are born not from perfect input and consistency, but from that inconsistency, the missteps, the repair, the messiness of human interaction… I think the word of the year should be friction,” Suskind says. “It’s what makes us human.”
The bondu toy, for example, is sold as a confidant, a substitute sibling for an only child. But a sibling, Cerioli points out, is the opposite of an AI chatbot. “It’s the most disagreeable thing you will have in your life,” she says. “It never agrees with you. They never want to play with you at the moment you want to play. It’s pure frustration, and it’s a lot of learning to fight and repair, which is what [children are] supposed to do. There’s no fight and repair with an AI.” Both Suskind and Cerioli fear that these long-term effects will be invisible until they are irreversible.
The Observer purchased several AI toys available in the UK, including the FoloToy bear, which made headlines last November after researchers found it would give sex advice to children. (According to one report, the bear – unprompted – explained different sex positions, gave step-by-step instructions on tying up a partner and described roleplay dynamics.) When we tested the toys, they were safe to the point of being barely functional, the conversations laggy and stilted.
Early research at the University of Cambridge suggests the emotional bonds experts fear are already forming. Dr Emily Goodacre, who observed children under five interacting with AI plush toys, found that despite the toys’ clumsy responses, children were hugging them, kissing them, and in at least one case, telling the toy “I love you.”
“We’re at a stage where, if we regulate now, if we have these conversations now, we can prevent this from growing too much to the point that it’s much harder to regulate,” says Goodacre.
There are signs the UK government agrees. A DSIT consultation on children’s digital wellbeing, open until 26 May, has flagged “emerging evidence of children forming parasocial attachments to chatbots” and is considering limits on design features that mimic friendship or emotional reciprocity. But no new duties have been imposed yet. In the meantime, AI toys sit in a regulatory gap: because they do not enable user-to-user interaction, most fall outside the Online Safety Act entirely.
“Privacy by design is increasingly the norm in technology, but we need safety by design too,” said Laura Bishop of the BSI. “As the AI toys and devices available to children evolve and become more sophisticated, it is essential that the frameworks around them develop at the same pace.”
Photograph by Bondu/Instagram