AI in Legal Practice: The Risks You Cannot Afford to Ignore
The Upper Tribunal has issued a stark warning to the legal profession following two cases in which solicitors placed client documents into open-source AI tools and cited fictitious case references before the court. The message from Judge Fiona Lindsley could not be clearer: using tools such as ChatGPT with client information is a data breach, a breach of client confidentiality, and a waiver of legal privilege. For any firm that has not yet taken AI governance seriously, the time to act is now.
What Happened
Two separate matters came before the Upper Tribunal's Immigration and Asylum Chamber, both involving the citation of case references that did not exist.
The resulting hearings exposed not just the dangers of AI hallucination in legal research, but a deeper failure of supervision, process, and understanding within the firms concerned.
In the first case, a solicitor admitted uploading client emails and Home Office decision letters to ChatGPT to help improve and summarise them. He acknowledged, only when pressed, that this constituted a data breach and that he would need to self-report to both the Immigration Advice Authority and the Solicitors Regulation Authority. The tribunal noted it would have made those referrals itself had he not already done so.
The second case revealed a more systemic failure. Grounds for judicial review had been drafted by a very junior caseworker, not a qualified trainee as initially described, and had not been properly checked by the supervising solicitor. The senior solicitor's claim that there was "no mechanism" for staff to use AI at his firm was dismissed by the tribunal, which pointed out that anyone with access to Google has access to AI.
Both cases were referred to the SRA.
Why This Matters Beyond These Two Firms
Judge Lindsley noted a "considerable increase" in the citation of fictitious authorities in the latter half of 2025. This is not an isolated problem confined to immigration law. It is a profession-wide issue, and regulators and tribunals are paying close attention.
The Upper Tribunal has already updated its forms to require legal representatives to confirm by a statement of truth that any cited authority exists, can be located using the citation provided, and supports the proposition for which it is cited. Other courts and tribunals are likely to follow suit.
The consequences of getting this wrong go beyond regulatory referral. Reputations are damaged. Clients are harmed. Public confidence in the legal system is eroded. And as the tribunal observed, scarce judicial resources are wasted sending judges on what it described as a fool's errand.
Practical Steps Every Firm Should Take Now
1. Audit Your Team's AI Usage Today
Do not assume that because you have not sanctioned the use of AI tools, your staff are not using them. As the tribunal pointed out, access to Google means access to AI. Speak openly with your team, create a safe environment for people to disclose which tools they are using, and build an accurate picture of your current position; you cannot address exposure you do not know about.
2. Implement a Clear AI Acceptable Use Policy
Every firm needs a written policy that sets out which AI tools are permitted, which are prohibited, and why. That policy should distinguish between closed-source tools that do not share data externally, such as Microsoft Copilot used within a properly configured organisational environment, and open-source tools such as ChatGPT, which must not be used with any client information, privileged material, or confidential data.
The policy should be signed off at partner level, communicated to all staff including support staff and paralegals, and reviewed regularly as the technology landscape evolves.
3. Never Input Client Data into Open-Source AI Tools
This is the non-negotiable rule. As Judge Lindsley explained, placing client information into an open-source AI tool is to place it in the public domain. That is a data breach. It waives legal privilege. It breaches your duty of confidentiality to the client. It may require notification to the ICO and it will require self-reporting to your regulator.
No efficiency gain justifies that risk. If your team uses AI to summarise documents or improve drafts, those tasks must be carried out only within approved, closed-source environments with appropriate data processing agreements in place.
4. Verify Every Case Citation Before It Leaves Your Office
AI tools can generate plausible-sounding but entirely fictitious case references. This is not a theoretical risk; it is happening with increasing frequency. Any document that cites legal authority must be checked by a qualified lawyer before it is filed or sent. That means locating the actual case, reading the relevant passage, and confirming that it supports the proposition for which it is cited.
Build this verification step into your workflow as a mandatory checkpoint, not an optional extra. No document citing authority should be filed without a second pair of qualified eyes having confirmed each reference.
5. Strengthen Your Supervision Framework
Both cases before the tribunal were ultimately failures of supervision. Junior and unqualified staff were producing legal documents that qualified solicitors either did not check or did not check properly. The tribunal was unequivocal: it does not matter how an error comes about. The qualified professional with conduct of the matter is responsible for the accuracy of what is filed.
Review your supervision arrangements across all fee-earning and casework staff. Ensure that anyone producing legal documents, drafting grounds of appeal, or conducting legal research understands the risks of using non-specialist AI tools, and that their work is reviewed by a qualified lawyer before it reaches a client or a court.
6. Train Your People Properly
A policy document that sits unread in a shared drive achieves nothing. Invest in proper training that explains what AI tools are, how they work, why the risks are real, and what your firm's rules require. That training should cover everyone, from partners and associates through to paralegals, trainees, and administrative staff who may have access to client files.
Refresher sessions should be scheduled regularly, particularly as new tools emerge and the regulatory position develops.
7. Establish a Reporting Culture
Staff who discover that an error has been made, whether a false citation has been included in a document or client data has been input into an inappropriate tool, need to feel able to raise it immediately without fear of disproportionate consequences. Early disclosure, as the first case demonstrated, can make a material difference to how regulators and tribunals respond.
Make clear to your team that prompt, honest reporting of mistakes is expected and supported, and that the greater risk lies in concealment.
A Word on Closed-Source AI Tools
The tribunal's ruling was not a blanket prohibition on AI. Judge Lindsley specifically noted that closed-source tools which do not place information in the public domain can be used for tasks such as summarisation without the confidentiality risks associated with open-source tools.
If your firm wishes to adopt AI in a meaningful way, the right approach is to work with your IT provider or a specialist consultant to identify suitable tools, ensure appropriate data processing agreements are in place, and configure those tools in a way that keeps client data within your controlled environment. That takes time and investment, but it is the only compliant route to harnessing AI's efficiency benefits in legal practice.
The Bigger Picture
The legal profession is at an inflection point with AI. The tools are powerful, they are widely accessible, and staff at every level are using them, whether firms know it or not. The question is not whether AI will be used in your practice, but whether it will be used safely, within a framework that protects your clients, your firm, and your regulatory standing.
The cases before the Upper Tribunal serve as a warning that the profession cannot afford to ignore. Build your framework now, before an avoidable mistake forces the issue.
This article is intended as general guidance and does not constitute legal advice.