Rethinking UPL: Let AI Legal Tools Close the Justice Gap

AI can widen access to legal help, but UPL fears are icing out tools people need. Use transparency, trials, and data - regulate proven harm, not hypotheticals.

Published on: Dec 02, 2025

AI for Justice: Scaling legal help without choking innovation

AI is changing how everyday people get legal help. Yet fear around unauthorized practice of law (UPL) is freezing tools that could close the justice gap. Meanwhile, 92% of low-income people receive no or insufficient legal help. Protection matters - protectionism does not.

Key insights

  • Regulation is drifting into restraint: Overbroad or premature rules risk protecting a business model over consumers and shrinking legal help in the process.
  • Safeguards already exist: Consumer protection and product liability laws give recourse for harmful tools; more bans aren't automatically better.
  • Balance beats blanket bans: Start with experimentation, education, transparency, and data - regulate proven harm, not hypothetical harm.

What problem are we actually solving?

UPL is meant to prevent nonlawyers from giving legal advice. Yet general-purpose AI from Big Tech fields untold legal questions daily. As Damien Riehl puts it: if applying law to facts is "legal advice," are we prosecuting that at scale - or only when a smaller legal tech vendor does it using the same models?

If we won't apply UPL equally, we're not protecting the public - we're constraining competition. Consumers pay the price in fewer, slower, pricier options.

Why premature regulation backfires

Rules that pre-clear or ban classes of AI tools up front act like prior restraint. They limit supply before evidence of harm. AI also moves faster than rulemaking cycles, so by the time a rule lands, the tech has shifted and the rule chills the good along with the bad.

A smarter approach treats AI legal products like any other product: monitor outcomes, investigate alleged harm, and apply existing liability and consumer protection remedies. Then regulate specific, demonstrated risks.

Safeguards that already exist

We already have strong protections. Consumer protection statutes, unfair/deceptive practices enforcement, and product liability create accountability for false claims, unsafe features, and negligent design. State attorneys general can and do take action when tools hurt consumers.

Shift the burden: ask regulators to show data-backed harm, and to weigh whether a proposed rule would constrain supply in a market already failing the majority. That keeps the focus on public outcomes, not professional turf.

A data-first, phased approach

The Institute for the Advancement of the American Legal System (IAALS) recommends phased oversight: start with experimentation, education, transparency, and measurement, then calibrate regulation based on results. That lets helpful tools reach people while risks are mapped and contained.

If data later shows issues - e.g., privacy, security, or reliability - adopt focused requirements such as disclosure standards, model governance, or certification for specific risk categories.

Learn more about IAALS' work

States showing momentum

Colorado published a non-prosecution policy for AI tools aimed at improving access to justice. The principles are simple: be clear about what the tool does and doesn't do, educate consumers on limits and risks, and include a lawyer in the loop where appropriate. Utah, Washington, and Minnesota have explored similar paths.

Texas amended its UPL statute to exclude tech products from UPL enforcement when they include clear disclosures that they are not a substitute for a licensed lawyer. Clear guardrails like these promote innovation and protect consumers at the same time.

IAALS and Duke University's Center on Law & Tech are collaborating on materials to help states implement non-prosecution or similar policies. See Duke's Center on Law & Tech.

Practical guardrails for AI legal tools

  • Plain-language disclosures: scope, limitations, and when to see a lawyer.
  • Consumer education: highlight risks of relying on nonlawyer advice.
  • Lawyer-in-the-loop for higher-stakes use cases or complex decisions.
  • Evidence standards: accuracy testing, versioning, and audit trails (a minimal sketch follows this list).
  • Safety practices: privacy-by-design, security reviews, and data minimization.
  • Redress: clear complaints process and insurance or reserves for claims.
  • Marketing truthfulness: no overpromising, no implied attorney-client relationship.
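
To make the evidence and audit-trail guardrails concrete, here is a minimal sketch of what a tool might record for each answer it gives. It is illustrative only: the Disclosure and AuditRecord structures, their field names, and the JSON-lines log format are hypothetical, not an existing standard or API.

```python
"""Minimal sketch: plain-language disclosure plus a per-answer audit record.

Hypothetical field names and JSON-lines format; not an existing standard or library.
"""
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class Disclosure:
    # Plain-language scope shown to the user before any answer.
    scope: str = "General legal information only; not legal advice."
    not_a_substitute_for_lawyer: bool = True
    see_a_lawyer_when: str = "Deadlines, court filings, or high-stakes disputes."


@dataclass
class AuditRecord:
    # One entry per answer; supports accuracy testing, versioning, and redress.
    record_id: str
    timestamp: str
    model_version: str
    question_category: str        # e.g. "eviction", "small claims"
    disclosure_text: str
    referred_to_lawyer: bool


def log_interaction(model_version: str, question_category: str,
                    referred_to_lawyer: bool,
                    disclosure: Disclosure | None = None,
                    logfile: str = "audit_log.jsonl") -> AuditRecord:
    """Append one audit record as a JSON line and return it."""
    disclosure = disclosure or Disclosure()
    record = AuditRecord(
        record_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        question_category=question_category,
        disclosure_text=disclosure.scope,
        referred_to_lawyer=referred_to_lawyer,
    )
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

Records like these make the evidence and redress guardrails actionable: answers can be re-tested when the model version changes, and a complaint can be traced back to the exact interaction that produced it.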

What legal leaders and regulators can do now

  • Justice tech founders: publish disclosures and accuracy metrics, and invite third-party evaluation. Log errors and outcomes; ship updates that show measurable risk reduction (see the scoring sketch after this list).
  • Bars and regulators: make non-prosecution policies public where you have them; open sandboxes for supervised pilots; require transparent consumer notices instead of blanket restrictions.
  • Courts and legal aid: trial limited-scope AI tools for form-filling, triage, and referrals; capture data on resolution rates, time-to-outcome, and user satisfaction.
  • Law firms and in-house teams: set internal guidelines for AI use, review outputs, and document decision points. Use these tools to widen the front door for modest-means clients.
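
As one hedged example of the founders' checklist above, the following sketch computes per-category pass rates from a lawyer-reviewed benchmark file so they can be published alongside each release. The benchmark.jsonl path, its fields, and the binary pass/fail grading are assumptions for illustration, not a reference method.

```python
"""Sketch: per-category pass rates from a hypothetical lawyer-reviewed benchmark."""
import json
from collections import defaultdict


def score_benchmark(path: str = "benchmark.jsonl") -> dict[str, float]:
    """Each line of the benchmark file is assumed to look like:
    {"category": "eviction", "tool_answer_correct": true}
    """
    totals: dict[str, int] = defaultdict(int)
    correct: dict[str, int] = defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            totals[row["category"]] += 1
            correct[row["category"]] += int(row["tool_answer_correct"])
    # Pass rate per question category, rounded for publication.
    return {cat: round(correct[cat] / totals[cat], 3) for cat in totals}


if __name__ == "__main__":
    # Publish these numbers alongside the model version and test date.
    print(json.dumps(score_benchmark(), indent=2))
```

Publishing numbers like these, per category and per model version, gives regulators and consumers the outcome data a phased, evidence-based approach depends on.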

What's next

The current system isn't delivering equal justice. AI-enabled tools can ease that pain if we let them compete on outcomes and hold them to account with data. Regulate what proves harmful. Encourage what measurably helps.

That's how we protect consumers, support overburdened courts and legal aid, and give ethical builders the room to deliver practical help at scale.

If you want structured learning to upskill your team on practical AI methods, see this curated list of programs by role: AI courses by job

