Most of your energy goes on nagging your children to brush their teeth, find their shoes and get out of the door. Teaching them how to use artificial intelligence (AI) responsibly may struggle to make it near your to-do list, particularly when you’re still figuring out the technology yourself.
But AI isn’t waiting for you to be ready. It’s already embedded in smartphones, search results and the voice assistant they pepper with random questions at breakfast. Plus, according to the regulator Ofcom, around half of UK children aged eight to 17 use generative AI, often for school work. By the time you catch up, they may be ahead of you.
So what do you actually need to know? You certainly don’t need to become an AI expert. What will help, though, is understanding how to steer your children towards a cautious, thoughtful approach – while making sure the tech’s not doing the thinking for them.
Start with thoughtful use cases
ChatGPT’s own guidance says the tool isn’t meant for children under 13. In reality, many younger children will still come across these tools via home devices. So how might you use AI in age-appropriate, supervised ways, with you firmly in the loop?
For younger children, you could start with AI’s image capabilities. The image generators in ChatGPT and Gemini have come on in leaps and bounds this year. They let children turn simple descriptions into pictures. A friend’s seven-year-old recently asked Gemini to “draw a dinosaur wearing a top hat eating spaghetti”. He was suitably delighted.
The same tools can also be used for something more purposeful, such as bringing to life characters from a story your child is writing, creating personalised colouring sheets, or coming up with an infographic for something they’re learning about at school.
If you have older children potentially using AI for homework, point them towards Gemini or ChatGPT’s study mode. This basically pushes back instead of handing over answers. It breaks requests into steps and nudges children to work things out for themselves, rather than happily doing the thinking for them.
Google’s NotebookLM is another impressive tool. It works only with the sources your child gives it, such as their school notes or PDFs. The free version takes up to 50 sources per notebook, and from these it can generate all sorts: quizzes, slide decks, mind maps and audio summaries. Because it sticks to what’s been uploaded, there’s less risk of it making things up. And it allows your child to learn in whatever way suits their brain best.
Teach them to question it
Many children have little instinct to doubt AI, so it’s important to point out its limitations.
These chatbots can spout rubbish and have a worrying tendency to agree with everything you say and tell you that you are the cleverest person alive.
The good news is that a dose of healthy scepticism can be taught. I tell children that AI is like a very clever parrot. It’s heard zillions of conversations and can repeat things in a way that sounds right, but that doesn’t mean it understands what it’s saying or cares like we do.
Help children to question AI-generated responses too. When a summary appears at the top of a search result, tell them it’s written by a machine and may not be right. So how can they check it’s accurate?
San Kaur Mehra, a UK parenting coach and child behaviour specialist, adds that children often enjoy games such as spotting the difference between a real image and an AI-generated one. “Also suggest they ask Alexa questions it can’t answer. It helps children understand that AI might not know everything,” she says. I tried this with my godson. He asked Alexa what his hamster was thinking. The answer was useless, but the point landed.
Keep the conversation open
The goal isn’t constant monitoring, it’s open conversation. The biggest mistake, Kaur Mehra says, is making AI taboo. “A lot of the problems that I see come when children go behind their parents’ backs to use AI because they wouldn’t approve.”
One rule many parents find useful: if AI is involved, it shouldn’t be secret. Anything your child asks an AI should be something you can talk about together.
A few basic ground rules help: never share personal information with AI; show an adult any response that feels worrying or confusing; and keep devices out of bedrooms at night.
Know where to get help
I’ve focused here on AI as a creative and learning tool, used with care. But there are, of course, far bigger risks, such as children forming emotional attachments to chatbots or turning to AI for mental health support when they need a real person.
These are serious, fast-moving issues. Internet Matters and Childnet are both good starting points for practical guidance on keeping children safe online.
If your child seems worried, obsessed or unusually secretive about a chatbot, talk to them and, if necessary, seek professional guidance. Beyond that, trust your judgment.
Harriet Meyer spent more than 20 years writing about personal finance before becoming somewhat obsessed with artificial intelligence
Illustration by Charlotte Durance