Bar Associations Give Lawyers Flawed Guidance on AI Verification
The American Bar Association and the Mississippi Bar have issued ethics opinions on generative AI that contain significant gaps. Both organizations tell lawyers they may use AI tools if they verify outputs, obtain informed consent, and protect confidentiality. But the specifics of how to verify, and when verification can be reduced, create professional risk.
Mississippi Ethics Opinion No. 267 instructs lawyers to "trust but verify" generative AI outputs. That framing is the problem.
Lawyers should not begin from a posture of trust toward tools known to fabricate legal citations, invent quotations, and misstate case holdings. The ethical default should be the opposite: assume output is unverified until independently confirmed.
The Problem With "Legal-Specific" Tools
Both the ABA and the Mississippi Bar say that lawyers using AI tools designed specifically for legal work, such as legal research or contract review, may require "less independent verification or review" if the lawyer has prior experience with the tool. This language appears verbatim in ABA Formal Opinion 512, issued in July 2024.
That guidance is wrong in practice. Under these opinions, a lawyer who has run legal research through a legal-specific AI tool before without seeing fabricated cases may reduce verification. The predictable result: fake citations filed in court, joining a growing list of sanctions cases.
Prior personal experience is a weak foundation for reduced review. Tools change constantly. Models update. Interfaces are redesigned. Guardrails shift. A lawyer's experience with a tool last week does not predict its reliability today.
The ABA opinion itself acknowledges this contradiction. It describes generative AI as "a rapidly moving target" with features and utility "quickly changing and will continue to change in ways that may be difficult or impossible to anticipate." Yet it still suggests prior experience supports less verification.
The ABA's own cited authority undermines this advice: the opinion points to a Stanford study finding that leading legal research companies' AI systems "hallucinate between 17% and 33% of the time."
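A quick back-of-the-envelope calculation shows why rates like these preclude reduced verification. The sketch below is illustrative only: it assumes each citation in a filing independently carries the study's per-query error probability, which real errors will not exactly satisfy, but the order of magnitude is the point.

```python
# Chance that a filing contains at least one bad citation, assuming each
# citation independently has probability p of being hallucinated.
# (Illustrative model only; 17% and 33% are the per-query rates the ABA cites.)

def prob_at_least_one_error(p: float, n_citations: int) -> float:
    """1 - P(all n citations are clean)."""
    return 1 - (1 - p) ** n_citations

for p in (0.17, 0.33):
    for n in (5, 10, 20):
        print(f"error rate {p:.0%}, {n} citations: "
              f"P(at least one bad) = {prob_at_least_one_error(p, n):.1%}")
```

Even at the lower 17% rate, a brief citing twenty authorities would contain at least one bad citation roughly 98% of the time under this simple model. Nothing about prior experience with the tool changes that arithmetic.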
Contract Review and Missing Details
ABA Formal Opinion 512 also provides incomplete guidance on contract review. It says a lawyer using AI to summarize numerous contracts may skip manual review of the full set if the lawyer first tests the tool on a smaller sample and finds accurate results.
The opinion does not address critical conditions: risk level of the matter, importance of the contracts, consequences of missed provisions, representativeness of the sample, or quality-control measures for high-risk documents. It also ignores that AI systems change between initial testing and actual use.
Better guidance would be specific: subset testing may support limited triage or first-pass review. But any reliance on AI-generated summaries should be recent, matter-specific, risk-calibrated, and tied to the actual tool, workflow, and documents being used.
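How little a clean sample proves can be made concrete. The sketch below uses the standard "rule of three" from statistics; it assumes, purely for illustration, that contracts are independent and errors are equally likely across documents, assumptions a real contract set will rarely satisfy.

```python
# If a lawyer spot-checks n contract summaries and finds zero errors,
# what error rate is still statistically consistent with that result?
# Exact 95% upper bound: the p solving (1 - p)**n = 0.05,
# approximated by the "rule of three" as 3/n.
# (Illustrative model; real contracts are neither independent nor uniform.)

def clean_sample_upper_bound(n: int, alpha: float = 0.05) -> float:
    """Largest error rate still consistent with zero errors in n trials."""
    return 1 - alpha ** (1 / n)

for n in (10, 30, 100):
    print(f"clean sample of {n}: error rate could still be up to "
          f"{clean_sample_upper_bound(n):.1%} (rule of three: {3 / n:.1%})")
```

A flawless spot-check of ten contracts is still consistent with the tool mangling roughly one contract in four, which is why sample testing can support triage but not wholesale reliance.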
What Lawyers Should Actually Do
The correct professional rule is straightforward: lawyers may use generative AI and LLM tools, but must verify outputs according to risk. Citations, quotations, holdings, and case analysis require independent verification. Prior experience with a tool may inform how a lawyer uses it, but cannot justify reduced verification in a field where the technology changes regularly.
When AI output may affect legal advice, court filings, factual representations, or client rights, treat it as unverified until confirmed through independent professional judgment.
Bar associations were right to address generative AI. Lawyers need guidance. But that guidance must be precise, must not frame trust as the starting point, and must not suggest that legal-specific tools or prior user experience permit reduced verification in a technology environment that changes by the week.