Turing turmoil: UK’s hope to lead world in AI is at war with itself

Ministers are worried, partners are mulling legal action, and top managers are accused of losing their grip


In March last year, the then chancellor, Jeremy Hunt, awarded £100m in government funding to the Alan Turing Institute, one of the largest financial commitments to a research body in the UK, in the hope that it would be at the forefront of British capability in artificial intelligence. Yet just over a year later, the institute is at war with itself.

Staff have accused leaders of presiding over chaos, universities are threatening legal action over cancelled partnerships, and funders are reconsidering their support. In December 2024, a letter of no confidence in the leadership, signed by 93 staff, called on the institute’s board to “urgently intervene”.




And last week – after little response to the initial revolt – “serious concerns” were raised in a whistleblowing complaint to the Charity Commission, which has yet to launch an inquiry.

Ministers, too, are losing patience. Last month, the technology secretary, Peter Kyle, called for new leadership at the institute. He told the institute to pivot toward defence, national security and “sovereign capabilities” – and threatened to pull funding if it did not. Public sector funding represents almost half of the Turing’s annual income of more than £50m.

Yet on Friday, in a letter seen by The Observer, the Turing’s chair, Doug Gurr, said the organisation would “step up at a time of national need”. But the former UK boss of Amazon, who joined in 2022, stopped short of committing to either a change in leadership or a defence-focused mission.

A former McKinsey consultant, Gurr serves as interim chair of the Competition and Markets Authority and is director of the Natural History Museum.

At the centre of much of the unrest appears to be the leadership of Dr Jean Innes, who joined the Alan Turing Institute as chief executive in July 2023 to spearhead a strategic overhaul known as “Turing 2.0”.

Innes was tasked with refocusing the institute after a review by the UK Research and Innovation funding body found that the Turing had lacked effective governance between 2015 and 2023, leaving it underperforming.

“There was never a clear vision or strategy,” said one longstanding employee about the early days of the Turing. “The operating assumption seemed to be that if a critical mass of smart people gathered, things would just work.”

Several observers acknowledged the institute had delivered a number of respected research projects, but the November 2022 launch of ChatGPT dealt a reputational blow. Many saw it as evidence that the institute had missed the boat on large language models – a shortcoming underscored by a highly critical and widely publicised 2023 report from the Tony Blair Institute.

Institute chief Jean Innes greets foreign secretary David Lammy and his French counterpart Jean-Noel Barrot

Others say such criticism is unfair. “ChatGPT is unprecedented in as much as it was the fastest uptake of any technology in history,” said one senior academic. But the damage was done.

In May 2023, the Royal Statistical Society’s Martin Goodson said the institute was “at best irrelevant to the development of modern AI in the UK”.

Innes, another former Amazon executive, had never been a chief executive. “She has had a very difficult job in trying to come in, understand the ship, lead a research organisation – which she's never done – and steer the ship mid-course,” said a senior source working closely with the institute. Many agreed she was well-meaning but lacked the experience to meet the challenge of leading an institute in what is perhaps the fastest-moving field in history.

Under Innes, who is paid more than £200,000 a year, Turing 2.0 is still being rolled out – nearly two years on – in a drawn-out, chaotic process that has deepened uncertainty instead of ending it. The new strategy initially promised to centre efforts on three “grand challenges”: defence, healthcare and the environment. About 10% of the staff were told they were at risk of redundancy at the time – a restructure that is still ongoing.

In February 2024, three leading AI academics – Marc Deisenroth, Aldo Faisal and Andrew Duncan – were hired as scientific innovation directors to lead the grand challenges. Within months, they had quit. Jon Crowcroft, Innes’s special adviser, told The Observer the challenges were hindered by “extraordinarily slow decision-making or lack of decision-making” by senior management, who were unable to decide which staff to allocate to which project.

“It was like watching something through slow motion, it was very odd,” Crowcroft added. Sources close to the directors said that they worked extensively to develop the challenges, but were constrained by a lack of clarity and commitment from management. People with knowledge of the institute said any organisation focused on fast-moving technologies will experience high turnover of staff at a time of organisational change.

The wider redundancy process was “absolute chaos” and “ill-prepared”, claimed several anonymous sources. In more than one case, staff were hired and told they were being made redundant in the same week. Younger staff were at times in tears in the office.

Employee satisfaction surveys, first revealed by Alex Chalmers, found that only 11% of staff had a positive view of their leadership by spring 2024. By the end of the year, they were in open revolt. Throughout the chaos, sources say, Innes has failed to reassure an increasingly irate staff, in part because she has not presented a plan for the restructuring or communicated an inspiring vision of the potential of science and technology to benefit humanity.

Staff even play a game of “bullshit bingo” where they write down management’s corporate catchphrases, including the phrase “change is hard for everyone”. (“Change is hard for everyone, but not for [Jean Innes] apparently,” said one current staffer. “She’s not changing.”)

The recent whistleblowing complaint alleged a “toxic” culture at the institute. Most staff who spoke to The Observer disagreed with that characterisation. Yet they described the mood as frustrating, sad and bordering on farcical. “I learned the definition of being gaslit from working at the Turing,” said one staffer who left in the past year. Three others described their time at the institute as “traumatic”.

Partners and other stakeholders are frustrated too. According to a document seen by The Observer, the institute tried to cancel a partnership with Oxford University that it had already approved, saying the project no longer “matched the Turing realignment process”.

In an appeal letter dated January 2025, a member of staff at Oxford’s computer science department said: “Turing changing its mind about what its research priorities are is not one of the conditions that permits cancellation. And, I have to say, the legal team here would go nuclear at any suggestion of terminating the grant prematurely: there is simply no case according to the contract.” Staffers claim this incident was a result of senior management’s chaotic decision-making, and that the institute risks angering other partners whose funding is crucial to its future.

According to two sources, several concerned stakeholders have been in touch with the institute following the letter to the Charity Commission. “It appears already some are unlikely to pursue funding relationships that would have provided job security and growth,” said one source, referring to one of the Turing’s health-related projects.

In a letter to staff, Turing’s senior management acknowledged that the substantial organisational change had been “challenging” and said it would be completed in the autumn. The restructuring, the letter said, would “ensure we are set up to deliver focused, high-impact work at scale for the national need”.

It added: “We are committed to conducting our business with honesty, integrity and transparency and believe that a culture of openness and accountability is essential”, and said it “encouraged people to report suspected wrongdoing as soon as possible in the knowledge that their concerns will be taken seriously and investigated as appropriate, and that their confidentiality will be respected”.

A spokesperson for the institute, which is based in the British Library, said it was “responding to the national need to step up our work in defence, national security and sovereign capabilities and making sure we drive forward other high-impact work that supports the government’s missions and the interests of our funders.”

But the Turing’s response has left many staffers cold. They fear that, without a change in leadership, this vision will not materialise.

Analysts are also concerned. “There’s a significant public interest problem if you have millions of pounds’ worth of taxpayers’ money going into a flagship institute… where the ministers, the department, the funders and the staff have all de facto lost confidence in the senior leadership and the board,” said Ben Johnson, a science and technology policy expert. “I think that represents a really damaging kind of policy failure, which undermines trust in politics. It undermines our position on AI.”

Other countries have shown how national AI institutes can work. Canada alone has Mila, the Alberta Machine Intelligence Institute and the Vector Institute; Germany has the Max Planck Institutes. These centres occupy a special position between universities, big tech and government, coordinating high-impact projects unsuitable for individual institutions, keeping sensitive data – such as NHS records – within national control, and signalling a commitment to technological sovereignty.

“We should have a national institute for AI,” said one former staffer. “Just not this one.”

In part because of their belief in the power of these institutes, current staffers are organising and fighting for the future of the Turing. There is a “communal feeling that the Turing is something we need to preserve and it has all this potential for greatness”, said one former staffer.


Photographs by Ashley Cooper; James Manning/Getty Images

