Law Society Backs AI Under Current Laws, Calls for Clarity and Safeguards

Ministers tout sandboxes and exemptions to speed AI in law, but the profession says the rules already work. What's needed is clarity on data, liability, and oversight - plus firm safeguards.

Published on: Jan 07, 2026

AI in legal services: current laws work - clarity is missing

Ministers want to loosen rules to speed up AI adoption. The Law Society says the law already covers what matters; lawyers just need certainty on how it applies. The message is simple: don't waive protections - explain the rules and how to follow them.

What's on the table

The Department for Science, Innovation & Technology (DSIT) has floated an "AI Growth Lab" with time-limited regulatory exemptions to push autonomous tools into production faster. The pitch is that many rules predate machine-led decisions and slow down deployment, with a potential £140bn boost by 2030 if the UK moves faster. Legal services are singled out as a big prize if "unnecessary legal barriers" are removed.

The profession isn't asking for a free pass. It's asking for clarity.

What the profession is actually saying

Two-thirds of lawyers already use AI in some form. The brake isn't the rulebook; it's uncertainty around liability, data use, and oversight. As Ian Jeffery, CEO of The Law Society, put it: "AI innovation is vital for the legal sector and already has great momentum. The existing legal regulatory framework supports progress. The main challenges don't stem from regulatory burdens, but rather from uncertainty, cost, data and skills associated with AI adoption."

Where firms need clear answers

  • Client data: Must data be anonymised or pseudonymised before use in third-party or cloud AI tools? What are minimum standards for encryption, retention, and location of storage?
  • Liability: If an AI tool contributes to harmful advice, who is responsible - the solicitor, the firm, the vendor, or the insurer? How should this be reflected in engagement terms and insurer disclosures?
  • Supervision: When is direct human oversight required? Do firms need documented sign-off for each AI-assisted step, or is policy-level supervision enough?
  • Reserved legal activities: For advocacy, conveyancing, and probate, what level of AI assistance is acceptable without breaching professional duties?

A practical roadmap for firms

  • Adopt an AI usage policy that covers purpose, approved tools, data handling, human-in-the-loop thresholds, and audit logging.
  • Run a data protection impact assessment for any AI workflow touching client or confidential data. Define anonymisation standards and storage locations.
  • Map liability: update terms of business, panel vendor contracts, and professional indemnity notifications to reflect AI-assisted work.
  • Set supervision rules: define review gates for AI-generated outputs, especially in reserved activities, and record sign-offs.
  • Perform vendor due diligence: security certifications, model provenance, error rates, update cadence, and indemnities.
  • Train your team on safe prompts, red-teaming, and escalation when outputs look uncertain or risky.
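The audit-logging and sign-off steps above can be sketched in code. This is a minimal illustration only - the record fields, names, and validation rule are assumptions, not a prescribed standard - showing one way a firm might capture who used which approved tool, for what purpose, and who signed off.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Illustrative sketch: field names are assumptions, not a regulatory format.
@dataclass
class AIUsageRecord:
    matter_id: str            # internal matter reference
    tool: str                 # approved AI tool used
    purpose: str              # why the tool was used
    data_anonymised: bool     # was client data anonymised beforehand?
    reviewer: str             # solicitor who reviewed the output
    approved: bool = False    # human-in-the-loop sign-off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_record(record: AIUsageRecord, log: list) -> None:
    """Append a JSON-serialisable entry to an audit log, enforcing
    that every AI-assisted output names a human reviewer."""
    if not record.reviewer:
        raise ValueError("AI-assisted output requires a named reviewer")
    log.append(asdict(record))

audit_log: list = []
log_record(
    AIUsageRecord(
        matter_id="M-2026-0142",
        tool="draft-review-assistant",
        purpose="first-pass contract clause summary",
        data_anonymised=True,
        reviewer="A. Solicitor",
        approved=True,
    ),
    audit_log,
)
```

In practice a firm would persist entries to tamper-evident storage and tie the reviewer field to its supervision policy; the point here is simply that the review gate is enforced in code, not left to convention.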

What regulators should publish next

  • Definitive guidance on data use: anonymisation expectations, cross-border processing, retention, and model training restrictions.
  • A liability and accountability framework: who carries what risk across solicitor, firm, vendor, and insurer - with examples.
  • Minimum security standards for AI vendors serving legal work, plus model audit and logging requirements.
  • Clear supervision standards for AI across reserved activities, including acceptable use cases and prohibited ones.
  • Templates: client disclosures for AI-assisted work, procurement due diligence checklists, and incident reporting formats.

Safeguards are non-negotiable

Government says any sandbox will include red lines to protect rights and safety. The Law Society supports a legal services sandbox only if it upholds - not sidesteps - professional standards. "Technological progress in the legal sector should not expose clients or consumers to unregulated risks," Jeffery said. "Current regulation of the profession reflects the safeguards that Parliament deemed vital to protect clients and the public. It ensures trust in the English and Welsh legal system worldwide."

Or, as Jeffery added: "The Law Society strongly supports innovation provided it remains aligned with professional integrity and operates in a solid regulatory environment. The government must work with legal regulators and bodies to ensure adherence to the sector's professional standards. Any legal regulatory changes must include parliamentary oversight."

Bottom line

We don't need weaker rules to use AI well in legal services. We need clear guidance, consistent supervision, and accountability that clients can trust. Build the standards, publish the guardrails, and adoption will follow.

Looking to upskill your legal team on AI?

If you need structured, practical training for legal workflows, see curated options by role at Complete AI Training.

