Judges in immigration tribunals are using AI tools to generate skeleton judgments and have official approval to ask chatbots to check their decisions, The Observer can report.
It is not quite “computer says deport”, but represents a step towards the kind of automated justice system that senior judges predicted, and goes beyond what ministers have revealed about how AI is used by the courts.
Hundreds of immigration judges have been trained in using a restricted version of Microsoft’s chatbot Copilot to help them prepare for hearings and write their decisions.
Ministers and senior judges hope the technology could speed up justice, which is creaking after years of underfunding and grappling with a backlog of 140,000 immigration cases. But AI experts and barristers warned that large language models (LLMs) are unreliable and should be monitored to understand their impact on justice.
The Ministry of Justice (MoJ) and HM Courts and Tribunals Service (HMCTS) did not answer questions about whether they are measuring AI use, and judges have not been asked to gather data.
David Lammy, the justice secretary, announced last February that the judiciary was “testing transcription in the courts and tribunals... and in the immigration and asylum chamber, some judges are using it to help formulate notes and write remarks.”
Sir Geoffrey Vos, the master of the rolls, suggested in February that litigants may prefer machines to deliver justice in the future as they are likely to be “far quicker and cheaper than waiting for human judges”.
According to training materials seen by The Observer, judges are encouraged to use AI before Home Office hearings to generate a “case outline” – an index of the parties’ bundles of evidence – and a “bundle summary”, which summarises the cases and evidence and creates a timeline of events. It can also generate a list of the issues in dispute between the parties and use that to populate a “decision template”.
In a training video for judges, Lord Justice Dingemans, the senior president of tribunals, describes how they can use the AI and its “decision-making tree” to generate summaries of their findings relating to issues such as the anonymity of people in the case, the case background, witness statements and arguments. “All of that work is pre-done,” he says in the video. “What that will do is mean that when you get to the hearing, you will be a better judge because you’re completely on top of the issues.”
After a hearing, judges are expected to deliver decisions within two weeks. They are warned not to use Copilot for legal analysis, told to read all documents in their bundles and reminded that they are solely responsible for writing their decisions.
However, one of the officially approved prompts for Copilot in the training materials is labelled “critical analysis”. It reads: “Review this decision against the summary of the evidence and submissions above, identify any grammatical or typographic errors, and comment on how fully the decision addresses matters raised in the evidence and submissions, identifying any omissions.”
The distinction between “legal analysis” and “critical analysis” is not obvious, and the prompt may invite the AI to cross the line from summarising and error-checking into doing cognitive work on behalf of a judge.
The Observer asked the MoJ and HMCTS if they were gathering data on how judges use AI tools, how often they use them, what the outcomes are and whether it makes them more or less likely to allow or dismiss an appeal. They did not address those questions, and it is understood judges have not been told their use of AI will be monitored and have not been asked to collect or provide any data on how it is used.
HMCTS said the tool focused on transcribing decisions dictated by judges and had been developed in line with the MoJ’s responsible AI principles. The AI did not in any way contribute to the analysis or balancing of evidence or arguments presented, it said.
“We welcome the appropriate use of AI in supporting an efficient and effective courts and tribunals system,” an HMCTS spokesperson said. “However, while technology may assist in some legal work and associated administrative tasks, it cannot replace the pivotal judgment and responsibilities required to make decisions on cases.”
HMCTS is believed to be developing a tool that will identify which areas of the law are relevant to a specific case and add them to the decision template. It did not comment on this when asked.
Lawrence Akka KC, the head of the Bar Council IT Panel, said he welcomed “all attempts to improve access to justice and the administration of justice”, but added that any new technologies and tools, including AI, must be “used transparently and should be monitored and evaluated to ensure they are effective”.
Tim Gordon, co-founder of Best Practice AI, which advises organisations on AI adoption, said judges may already be using AI tools to summarise documents.
“So having a programme that says this is how you should do it and puts training around it can be a positive thing,” he said. “One of the only ways the UK state is going to speed up the things it does is by using AI. But if they’re not measuring the output or the impact then that would be concerning.”
AI is already being used in the immigration system. Home Office caseworkers use AI to summarise interviews with asylum seekers and to search country policy information. An evaluation showed AI could save 23 minutes per interview and 37 minutes on country policy information, but 9% of interview summaries were too inaccurate to be used. Nearly a quarter of caseworkers said they were not confident in the information the tool provided.
Judges in immigration tribunals have criticised litigants and barristers for using ChatGPT and other LLMs, which have invented false case law. In August, for example, an upper tribunal judge referred a barrister to the Bar Standards Board after he used ChatGPT to draft grounds of appeal for a client and it generated a fake case to back his arguments.
Microsoft warns that its Copilot chatbot, which the judiciary uses, “can reflect societal biases”, which may mean the AI is “unfair, unreliable, or offensive”. Microsoft highlights problems with stereotyping and translations, issues likely to affect immigration judges dealing with evidence from other countries.
Last month, barristers at Cloisters Chambers and Doughty Street Chambers published a legal opinion on behalf of the Open Rights Group warning that the way the Home Office was using AI to examine asylum claims could be unlawful and does not meet its legal obligations.