Every day, thousands of abusive messages flood the social media feeds of the world’s top athletes. In just the last fortnight, Katie Boulter has described receiving regular death threats, while Emma Raducanu’s stalker tried to buy Wimbledon tickets. As The Observer reported last week, this issue plagues professional sport – especially women’s sport – and can often feel too vast to solve.
But one company is trying to change that. In a modest office in central London, a small team is stemming the flow of online hate, and sometimes even winning that fight.
Signify Group have only 20 core staff, plus 25 analysts stationed around the world, yet they work across more than a dozen sports. Their client list includes World Rugby, Fifa, the WTA, Wimbledon and World Athletics.
Using a combination of their own artificial intelligence model and human analysis, they sift through the abusive content so the athletes don’t have to. They are also trying to deter the trolls. “Moderation is like a plaster, whereas our service is like the surgery,” says John Zerafa, Signify’s strategic adviser.
When they started five years ago, convincing sports bodies to invest in protecting athletes online was not easy. “They said they didn’t have the budget,” Signify CEO and co-founder Jonathan Hirshler says. “That’s changed. There was a realisation social media platforms can only do so much – or were only willing to do so much – and that organisations could do more to protect their players.”
Signify invited The Observer into their offices to get a full sense of their process. First, clients give them a list of athletes and staff. Signify then gather all of their social media handles and put their AI ‘Threat Matrix’ to work, ingesting every social media mention of every player on the system each day. Athletes can also opt in to Signify’s extra monitoring service, which scans the private messages they receive from strangers.
Signify have developed 20 different categories of keywords, emojis and images that can be problematic or abusive, ranging from comments about body image to racism, sexism, sexual violence and death threats.
The AI then flags matches to the analysis team. This human step in the process is what, according to Hirshler, sets Signify apart from other companies that simply use AI as a moderator.
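To make the pipeline concrete, here is a minimal sketch in Python of how such a flag-and-review step might look. Everything in it – the categories, the patterns, the handles and messages – is invented for illustration; Signify’s actual Threat Matrix, which also covers emojis and images, is not public.

```python
# Illustrative sketch only: none of this reflects Signify's real system.
import re
from dataclasses import dataclass, field

# A handful of invented example categories; the article says Signify
# use 20, spanning keywords, emojis and images.
CATEGORIES: dict[str, list[str]] = {
    "death_threat": [r"\bkill\b", r"\byou're dead\b"],
    "body_image": [r"\bfat\b", r"\bugly\b"],
    "sexism": [r"\bget back in the kitchen\b"],
}

@dataclass
class Flag:
    author: str
    text: str
    categories: list[str] = field(default_factory=list)

def screen(author: str, text: str) -> Flag | None:
    """Return a Flag for human review if any category pattern matches."""
    hits = [
        name
        for name, patterns in CATEGORIES.items()
        if any(re.search(p, text, re.IGNORECASE) for p in patterns)
    ]
    return Flag(author, text, hits) if hits else None

# Flagged messages go into a queue for analysts rather than being
# auto-actioned -- the human review step the article highlights.
review_queue = [
    flag
    for author, text in [
        ("@troll123", "you're dead if you lose tomorrow"),
        ("@fan456", "great match, congratulations!"),
    ]
    if (flag := screen(author, text))
]

for flag in review_queue:
    print(f"{flag.author}: {flag.categories} -> '{flag.text}'")
```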
“Our team, with our human eyes, actually reads the messages – and that can be a hard slog,” Hirshler says.
“But that means we can assess what’s driving this abuse.”
Signify will report abusive posts to the social media platforms, or escalate particularly serious content to their in-house investigations team. That’s where Jonathan Sebire, co-founder and head of investigations, comes in.
He has seen it all: rape threats, fixated individuals, gambling addicts and more. His job is to assess when a threat has the potential to become a physical danger to the athlete. One stalker Sebire’s team identified targeted a World Athletics competitor, sending hundreds of graphic sexual images and claiming they were in a relationship.
Because of the uptick in messaging frequency and the individual’s proximity to the athlete, Signify escalated the issue to law enforcement. “We are able to identify when someone goes from being a fan to a Monica Seles-type situation,” Sebire says bluntly, referring to the 1993 attack in which Seles was stabbed on court by an obsessive fan.
The motives behind abuse differ between sports, but understanding them is key to finding solutions. In football, working with Arsenal, Signify flag when accounts affiliated with fans are behind abusive messages, and the club has banned more than 40 season-ticket holders as a result. By explaining this policy to fans, Arsenal saw a nearly 90% decrease in the number of affiliated fan accounts sending abuse to their own team. “It shows that the deterrent effect works,” says Hirshler.
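The affiliation check itself amounts to cross-referencing flagged handles against a club’s own supporter records. The sketch below shows the idea with invented data; how Arsenal actually link accounts to season tickets is not public.

```python
# Illustrative sketch only: the handles and ticket records are invented.
flagged_handles = {"@gunner_troll", "@rando99", "@afc_abuser"}

# Hypothetical mapping of social handles to season-ticket records.
season_ticket_holders = {
    "@gunner_troll": "ST-10482",
    "@afc_abuser": "ST-22913",
    "@loyal_fan": "ST-00321",
}

# Accounts that are both abusive and affiliated with the club are the
# ones a club can act on directly, e.g. by suspending the season ticket.
affiliated_abusers = {
    handle: season_ticket_holders[handle]
    for handle in flagged_handles
    if handle in season_ticket_holders
}

for handle, ticket in sorted(affiliated_abusers.items()):
    print(f"{handle} matches season ticket {ticket} -> refer to club")
```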
Female players and officials are 30% more likely than men to receive abuse online, and Sebire says they will often receive more sexualised abuse or abuse relating to their appearance.
This is manifesting itself in new ways. Signify’s head of sport, Jake Marsh, relays how in diving they identified a disturbing trend: trolls sourcing images of female athletes, using AI to ‘de-clothe’ them, then spreading the doctored images.
In tennis, Signify are monitoring social media accounts for more than 8,000 players year-round. In a report with the WTA and ITF released last week, the two standout findings were that 40% of abuse in 2024 came from gamblers, and that 10 accounts (the majority run by men) were responsible for 12% of all abuse. Nine of those accounts have been taken down or had their posts and comments removed by the platforms.
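That concentration finding – a handful of accounts driving a disproportionate share of abuse – falls out of a simple per-account tally. The toy example below, with invented handles and counts, shows the arithmetic; the real report was built on year-round monitoring data.

```python
# Toy illustration of the concentration finding: count flagged messages
# per account and compute each account's share of the total.
# All data below is invented.
from collections import Counter

flagged_authors = [
    "@tipster_rage", "@tipster_rage", "@tipster_rage",
    "@bettor_x", "@bettor_x",
    "@casual_troll",
]

counts = Counter(flagged_authors)
total = sum(counts.values())

# The most prolific accounts often account for an outsized share.
for author, n in counts.most_common(3):
    print(f"{author}: {n} flagged messages ({n / total:.0%} of total)")
```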
The challenge is that tennis players are independent contractors, relying on security at tournaments to keep them safe. Hirshler says Signify have shared the names of a number of people showing signs of fixated behaviour with the tours’ security teams over the past few months.
Some inevitably fall through the cracks, though, as proved by the incident in February when Raducanu spotted her stalker in the crowd at a tournament in Dubai.
“These players in tennis are out there by themselves; it makes them vulnerable,” says Hirshler.
“The clients we work with, more and more, are asking us to be on the ground, embedded within their security apparatus so they can quickly get the detail of when we’ve seen something online.” As a result, a small team from Signify will be at Wimbledon throughout the Championships.
Signify say the next coordinated deterrent needs to be buy-in from gambling companies. Following their WTA and ITF report, one American gaming company has already warned customers that it can suspend the gambling accounts of those who abuse athletes online.
“That’s a real tangible effect of our work,” Hirshler says. “The key word for us is proactivity. We don’t believe that dealing with this problem has to be a victim-led solution. We can do that on their behalf, we can look at the source of the problem, and we can build deterrence.”