Agentic AI follows GenAI adoption curve in legal but raises new oversight concerns, Thomson Reuters report finds

Under 20% of law firms use agentic AI now, but half are planning to adopt it soon. Oversight concerns top the hesitation list, as lawyers worry about ceding too much control to autonomous systems.

Published on: Apr 10, 2026

Agentic AI adoption in legal follows GenAI's trajectory, but oversight questions loom

Less than 20% of law firms and corporate legal departments currently use agentic AI systems, but roughly half are planning or considering adoption in the near future, according to a new report from the Thomson Reuters Institute. The pattern mirrors GenAI adoption two years ago, when only 14% of legal organizations had enterprise-wide tools in place.

Agentic AI operates differently from the generative AI systems already widespread in legal practice. Instead of back-and-forth prompting and reviewing, agentic systems work autonomously, completing multi-step tasks with human input only at predetermined checkpoints. A system might independently research regulations, draft documents, identify risks, and revise work - all with minimal lawyer intervention.

Legal professionals express cautious optimism. Fifty-one percent said they feel excited or hopeful about agentic AI, while 19% expressed concern. About 47% believe agentic AI should be used for legal work, compared to 22% opposed.

Autonomy creates new barriers

The very feature that makes agentic AI efficient - its autonomy - is what makes lawyers nervous. The second most common reason legal professionals hesitate about agentic adoption involves monitoring and oversight concerns.

One lawyer from a US firm said: "Agentic AI, while exciting, to me removes oversight a step too far. I like the idea of prompting and reviewing a result. It is something else to have a machine have so much autonomy in the actual doing of a thing and potentially acting on my behalf without that very concrete review."

An assistant general counsel raised related concerns about hidden processes. "The fact that agentic AI operates in a much more autonomous way, with a lack of control from the user, means there are many unknowns that are hidden beneath the process," they said.

Education and human-in-the-loop are critical

Legal organizations planning agentic AI implementation need to rethink AI training programs. Generic education won't work - lawyers need explicit guidance on where human oversight must occur and what their role is in maintaining ethical standards.

Agentic autonomy does not mean unsupervised autonomy. Lawyers retain ethical duties over their work product, which means lawyer oversight remains necessary in any agentic system. Organizations that clearly communicate this human-in-the-loop requirement upfront will see more reliable adoption.

Currently, less than 20% of lawyers say their organizations measure AI's return on investment, and most corporate lawyers don't know how outside firms approach AI. Building clear AI strategy alongside new tools remains a top priority for 2026.

Legal adoption of agentic AI will likely accelerate over the next three to five years as software providers integrate these systems into their platforms. But that adoption depends entirely on whether firms can answer one question clearly: what does the lawyer actually do?

