AI in Law Firms: New Compliance Challenges and Practical Steps for Risk Teams
The SRA approved the UK’s first AI-driven law firm with strict compliance conditions. Legal firms must update risk frameworks and ensure transparency as AI use grows.

Firms Must Assess AI’s Impact on Compliance Obligations
Last month marked a key development for the legal sector when the Solicitors Regulation Authority (SRA) authorised Garfield.law, the UK’s first AI-driven law firm. This move opens new opportunities but also raises complex compliance questions for traditional legal practices.
Garfield.law focuses on small-claims debt recovery, using a large language model capable of managing the entire small claims track debt recovery process. The SRA’s approval came with strict conditions: the AI system cannot propose case law — a safeguard against AI errors known as “hallucinations” — and it cannot act without client consent at every stage. This regulatory stance offers valuable clues on how the SRA may regulate AI tools in legal services going forward.
Compliance Challenges for Traditional Firms
Legal practices must now carefully consider how incorporating AI affects their compliance duties. Firms should update risk assessment frameworks to identify and manage AI-specific risks, especially regarding data protection, client confidentiality, and ensuring adequate human oversight.
The SRA’s close monitoring of Garfield.law signals that regulatory bodies will scrutinise the use of AI more intensively. Compliance officers should prepare for detailed inquiries about any AI tools in use, their role in delivering legal services, and the safeguards put in place to prevent errors or misconduct.
SRA Rule 3.2 emphasises that solicitors must provide competent and timely services. As AI becomes more integrated, this requirement extends to understanding the capabilities and limitations of AI systems. Compliance teams will need to show they have the necessary technical competence when deploying these tools.
Financial Considerations
The financial impact of AI adoption includes upfront costs for implementation and training, ongoing expenses for monitoring and quality control, and investments in compliance infrastructure to meet regulatory standards. While AI may reduce costs in routine tasks and document preparation, it could also require adjustments to fee structures.
Transparency about AI use in legal services and how it influences client charges is essential under the SRA’s transparency rules. Compliance teams must ensure clear communication with clients regarding AI’s role in service delivery.
Practical Steps for Compliance Teams
To prepare for AI’s impact, risk and compliance teams should consider the following actions:
- Create an AI governance framework: Establish clear accountability, transparency, and risk-management protocols around AI use.
- Update policies and procedures: Incorporate AI-specific risks and guidelines into existing compliance documentation.
- Train staff: Provide education on ethical AI use, recognising AI’s limitations, and maintaining human oversight.
- Communicate with clients: Ensure client-care materials disclose when AI tools are involved in service delivery.
- Review insurance cover: Confirm that professional indemnity insurance adequately covers AI-related risks.
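For teams that track governance actions in a structured way, the steps above can be captured in a lightweight risk register. The sketch below is purely illustrative — the tool name, field names, and checks are assumptions for the example, not SRA requirements:

```python
from dataclasses import dataclass


@dataclass
class AIToolRecord:
    """Risk-register entry for one AI tool used in service delivery."""
    name: str
    purpose: str
    human_oversight: bool = False          # a named solicitor reviews all output
    client_disclosed: bool = False         # client-care materials mention the tool
    data_protection_assessed: bool = False  # data-protection impact assessment done
    insurance_confirmed: bool = False      # professional indemnity cover checked


def outstanding_actions(tool: AIToolRecord) -> list[str]:
    """Return the governance checks not yet completed for this tool."""
    checks = {
        "disclose use to clients": tool.client_disclosed,
        "complete data-protection assessment": tool.data_protection_assessed,
        "confirm indemnity insurance cover": tool.insurance_confirmed,
        "establish human oversight": tool.human_oversight,
    }
    return [action for action, done in checks.items() if not done]


# Example: a hypothetical document-drafting assistant with oversight in place
tool = AIToolRecord(name="DraftAssist", purpose="first-draft correspondence",
                    human_oversight=True)
print(outstanding_actions(tool))
```

A register like this gives compliance officers a single place to answer the kind of detailed regulatory inquiries discussed above: which tools are in use, what they do, and which safeguards are still outstanding.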
The SRA’s authorisation of Garfield.law marks the start of a new compliance phase for the legal profession. Even if firms do not immediately adopt similar AI systems, they should begin updating compliance strategies now to align with emerging regulatory expectations.
For legal professionals seeking to deepen their knowledge of AI applications and compliance, resources such as Complete AI Training's latest AI courses offer practical guidance and up-to-date insights.