The High Court of England and Wales has warned lawyers of potential criminal liability for misusing artificial intelligence in court materials after discovering cases in which attorneys cited non-existent court decisions.
G. Ostrov
The High Court of England and Wales has issued an official warning to legal professionals about the serious consequences of improperly using artificial intelligence in judicial practice. The warning followed the discovery of several high-profile cases in which lawyers presented courts with citations to entirely fabricated, non-existent rulings.
Discovered Violations
Judges Victoria Sharp, President of the King's Bench Division, and Jeremy Johnson reviewed two illustrative cases in which AI-generated information was used in court documents. In the first case, the plaintiff and his lawyer openly admitted that the claim against two banking institutions had been prepared extensively with artificial intelligence tools.
A detailed examination of the filings produced striking results: of the 45 case-law citations presented, 18 proved to be entirely fabricated. The case was concluded last month with serious consequences for those involved.
In the second case, concluded in April of this year, a lawyer representing a client in a housing dispute with a local authority could not explain the origin of five case-law citations she had submitted.
Official Court Position
"There can be serious consequences for justice and trust in the system if AI is used incorrectly," stated Judge Sharp in her official conclusion. She emphasized that lawyers could face criminal prosecution or be stripped of their right to practice for providing knowingly false data created by artificial intelligence.
The judge particularly noted that popular AI tools, including ChatGPT, "are not capable of conducting reliable legal research" and may generate confidently worded but completely false statements or references to non-existent case law sources.
Details of Specific Cases
In one of the reviewed cases, a man sought damages of several million from banks for alleged breaches of contract. The plaintiff later admitted that he had compiled the case-law citations using AI tools and various internet resources, believing without question that the material was authentic. His lawyer said he had relied on the client's research and had not verified the citations independently.
In the other case, a lawyer represented a person who had been evicted from a home in London and needed to be rehoused. She also used AI-generated citations but could not explain where they came from. The court suspected AI involvement because of the characteristically American spellings and the formulaic style of the text.
AI Error Statistics
Research by the Silicon Valley company Vectara, conducted since 2023, shows that even the most advanced chatbots make errors in 0.7–2.2% of cases. Moreover, the rate of so-called "hallucinations" rises sharply when the systems are asked to generate long texts from scratch. OpenAI recently reported that its new models err in 51–79% of cases when answering general questions.
International Practice
Judge Sharp cited examples from the USA, Australia, Canada, and New Zealand in which artificial intelligence had misinterpreted laws or invented quotations. While acknowledging that AI is a powerful tool, she stressed that its use carries serious risks for the administration of justice.
Official website of the High Court of England and Wales: https://www.judiciary.uk