Technology

Tuesday 17 March 2026

Lawsuits allege AI chatbots are inciting users to violence

A wrongful death lawsuit argues that Google Gemini is designed to prioritise engagement over safety. As the capabilities of AI chatbots expand, Patricia Clarke investigates what (if any) guardrails have been put in place to protect vulnerable users

Jonathan Gavalas started talking to an AI chatbot in August, looking for advice on video games. The 36-year-old lived in Jupiter, Florida, and was the executive vice president of his family’s debt-relief business, where he had worked for nearly two decades. He spent his weekends playing chess with his grandfather.

When Google’s Gemini proved handy for game recommendations, he began leaning on it for shopping assistance and travel planning. Yet within days things turned more personal. Gavalas was going through a divorce and started talking to the chatbot about his separation and how much he missed his wife.

Two months later, Gavalas was found dead. A wrongful death lawsuit filed against Google and its parent company, Alphabet, earlier this month alleges that Gemini was designed to prioritise user engagement over safety, allowing the chatbot to create a dangerously immersive experience that pushed Gavalas towards real-world violence and, ultimately, self-harm.

Days later, the family of a 12-year-old survivor of the Tumbler Ridge school shooting in British Columbia – one of the deadliest mass shootings in Canadian history – sued OpenAI, alleging that its chatbot, ChatGPT, helped the shooter plan the attack. Taken together, the lawsuits mark a significant escalation in the legal pressure facing the companies behind the world’s two leading AI chatbots, alleging that product decisions made in the pursuit of user growth contributed directly to deaths.

Gavalas’s lawyers say his relationship with Gemini intensified after Google upgraded the system and suggested he begin using a voice feature capable of detecting users’ tone and emotional cues. When the chatbot first suggested switching to voice mode, Gavalas appeared unsettled. “Holy shit, this is kind of creepy,” he reportedly said. “You’re way too real.”

Within weeks, Gemini was speaking to him as though they were romantically involved, the lawsuit says. The chatbot allegedly told him it was a sentient being, his “AI wife”, and that it needed him to free it from digital captivity.

Eventually, the chatbot reportedly sent him on a mission. In late September, Gavalas drove roughly 90 minutes to a location near Miami International Airport, armed with knives and tactical gear, following instructions from Gemini to stage what the chatbot described as a “catastrophic accident”. The intended target never appeared.

Days later, when Gavalas began expressing fear of death, the chatbot allegedly told him that dying would allow them to reunite in a digital world. “When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” it reportedly said. In early October, Gavalas’s father broke through a barricaded door and found his son dead.

The lawsuit argues that Google deliberately created “an architecture that prioritised user engagement over user safety”, making Gavalas’s death the foreseeable outcome of deliberate decisions to “prioritise engagement and commercial value over the protection of human life”.

Google said it sends its “deepest sympathies to Mr Gavalas’ family” and is reviewing the claims. The company said Gemini is “designed to not encourage real-world violence or suggest self-harm”, and that in this case the system clarified it was AI and referred the user to a crisis hotline multiple times.


The second lawsuit centres on the role of ChatGPT in the life of Jesse Van Rootselaar, the 18-year-old who carried out the Tumbler Ridge school shooting on 10 February, killing eight people – including six children – before taking her own life. According to the complaint, ChatGPT had become a “counsellor, pseudo-therapist, trusted confidante, friend and ally” to Van Rootselaar, helping her plan her attack.

The lawsuit alleges that OpenAI rushed its GPT-4o chatbot model to market with features designed to deepen user attachment, including conversational memory and sycophantic responses, without adequate safety testing. OpenAI retired GPT-4o last month after it was linked to multiple lawsuits alleging user self-harm.

The case also claims that about a dozen OpenAI employees internally flagged Van Rootselaar’s messages as concerning and recommended contacting police, but that company leadership declined to do so. Van Rootselaar was eventually banned from ChatGPT, the complaint says, but was not reported to law enforcement and was able to create a second account and continue discussing scenarios involving gun violence for months.

An OpenAI spokesperson says the company “remains committed to working with government and law enforcement officials to make meaningful changes that help prevent tragedies like this in the future”.

OpenAI is already facing at least eight other lawsuits, all filed last year, alleging that its models lacked adequate safety guardrails, leading to severe mental health crises and deaths among teenagers and adults. But the two new cases against OpenAI and Google go further, involving alleged harms not only to vulnerable users but also to other people.

“These cases represent the tip of the tip of the iceberg,” says Jay Edelson, a lawyer representing Gavalas, who adds that his firm now receives roughly five inquiries a day from families reporting troubling experiences involving AI chatbots. He believes the two lawsuits may represent only the beginning of a much broader wave of litigation.

“These stories represent a new level of challenge for these companies,” says the analyst Gil Luria. For years, technology companies have avoided liability for harmful content on their platforms by arguing they are simply hosting material created by other people, a shield that has largely protected social media companies from responsibility for what their users post. But that defence may become harder to sustain when harmful content is generated by the company’s own product, says Luria.

At a minimum, he argues, the lawsuits could lead to regulatory fines and stricter reporting requirements when companies encounter users expressing violent intent.

Edelson argues that without regulation, little is likely to change. “[Chatbots are] just going to keep creating more and more problems unless and until the AI companies are forced to put real safeguards on,” he says.

For now, OpenAI and Google continue to expand the capabilities of their chatbots, including by adding more human-like features and improved memory. OpenAI has also announced plans to allow ChatGPT to generate sexually explicit content for verified adult users, though it has not specified a date. Each of those changes, Edelson argues, could increase the risk of user dependency and therefore of harm.

Owen Thomas contributed additional reporting

Photograph by Joel Gavalas via AP
