Technology

Friday 8 May 2026

Passing the buck: how AI caused a harrowing month for a small charity

When calls to its crisis helpline increased tenfold, a small US organisation that works with journalists and activists went into meltdown. AI was directing suicidal children to get in touch

In late September 2025, a support worker at the charity Vita Activa sat down to begin her shift monitoring the organisation’s crisis helpline. Several messages had come in overnight, but one stood out. Just three lines long, it began: “Hello, I don’t want to live any more.”

Alarmed, the staffer asked the sender whether they were a danger to themselves or others. Hours later, a reply came back. “Thank you so much for your support,” it read. “We are grateful to you, but our daughter is no longer alive.”

Luisa Ortiz Pérez, co-founder of Vita Activa, says the message was the culmination of a harrowing month for the charity. In September alone, its crisis helpline received 537 requests for support – more than ten times the usual caseload. The majority of messages were from suicidal users, most of them children.

But Vita Activa is not a suicide helpline. It is a US-based charity with 19 staff that provides free, confidential support to journalists, activists and human rights defenders across Latin America and beyond.

Vita Activa’s Spanish-language helpline principally serves people in Mexico, Argentina, El Salvador and Colombia, helping them deal with problems such as burnout, online harassment and the psychological impact of covering conflict and violence. In a normal month, the charity handles between 40 and 60 cases, each taking roughly three hours to process.

“And then comes the summer of 2025,” says Ortiz Pérez. Herself a frontline responder, she recalls opening the organisation’s chat app one morning and scrolling through the queue. Messages were pouring in, and they were alarming. “Quiero desvivirme,” said one, translating as “I want to unalive myself”, gen Z slang for suicide. Other messages read: “I want to die. I want to kill myself. I am not staying here for long.” Who was sending them to their helpline?

Vita Activa responders always ask users their age, and these new users were overwhelmingly minors, some as young as 13. Their messages were laden with emojis and, while they came from the same four countries that Vita Activa’s Spanish helpline usually serves, they didn’t match Vita Activa’s usual user base of journalists and activists.

At first, the team was baffled by the sudden influx of messages. But then the users began saying they had been referred. “ChatGPT told me to come here,” Ortiz Pérez recalls users saying. “Meta AI said you could help.”

Ortiz Pérez and her team now believe the surge was driven by automated AI referral systems. At some point prior to the crisis, Vita Activa had been listed in a global directory of helplines, one used by major AI companies to redirect users expressing suicidal ideation or severe distress.

The organisation had been integrated into this pipeline without being asked, and with no control over who was being sent its way. The team experienced emotional exhaustion from constantly fielding severe mental health crises they had never been trained or resourced to handle. The responder who dealt with the message from the dead teenager’s family member has been receiving treatment for PTSD, as has Ortiz Pérez herself.

There were operational risks too. When a suicidal minor contacts your service across international borders via a chat platform, the legal obligations are complex. Vita Activa urgently consulted lawyers in both the US and Latin America, and was forced to draw up terms and conditions and a privacy policy from scratch. Desperate, and fearing for the organisation’s future, Ortiz Pérez’s team sent a cease-and-desist letter to the directory operators in late September, requesting that Vita Activa be removed from the referral system. Within days, the flow of cases dropped to zero.

But that brought a new crisis. After the referrals stopped, Vita Activa’s caseload flatlined entirely. The organisation suspects it may have been deprioritised by the same algorithmic systems that had previously been flooding it with cases. Ortiz Pérez and her team spent six months doing grassroots community outreach to return to their normal operational numbers.

The countries Ortiz Pérez’s team supports have mental health infrastructure that bears no comparison to what is available in Silicon Valley, where these products are designed. Community organisations like Vita Activa exist because public mental health systems have been weakened by years of austerity and political upheaval.

And it appears Vita Activa is not alone. According to a report published by Ortiz Pérez, several other helplines across Latin America reported similar surges in cases during the same period. Most declined to be named, citing the sensitivity of their work and fears about the consequences of speaking out against powerful technology companies. For organisations operating on shoestring budgets with tiny teams, a sudden, unmanaged wave of AI-generated referrals can threaten their survival.

Meetali Jain, executive director of Tech Justice Law Project, a US-based litigation and advocacy organisation that represents families harmed by technology products, says Vita Activa’s story represents a clear failure of safeguarding. Even if a tech company follows recommended mental health protocols by flagging a distressed user and providing a referral, that referral itself must be appropriate. In Vita Activa’s case, it plainly was not. “Anyone who looks at their website knows that they have a very particular mission,” she says. “The referrals were wrong. They were inaccurate.”

At the very least, Jain argues, AI companies should be required to consult and obtain consent from the organisations to which they refer users, and ensure those referrals are culturally appropriate. In future, she believes these companies should contribute to the funding of the infrastructure that absorbs the demand generated by their products.

“Our yearly budget compared to one of these companies… I think it might be their lunch money for one day,” says Ortiz Pérez. “We were not created to pick up their slack. We were created to help people.”

OpenAI and Meta have been approached for comment.

Photograph by Nikolas Pacheco Muller/Alamy
