AI tools create verification duties and privilege risks for tax practitioners, study finds

Tax lawyers who submit AI-generated work without verifying it face sanctions under existing rules; no new regulations needed. The Fifth Circuit made that clear in February 2026.

Published on: Apr 30, 2026

Tax Lawyers Must Verify AI Output or Face Sanctions

Generative AI in tax practice creates a clear liability problem: lawyers cannot treat AI as a junior associate or delegate verification duties to algorithms. Courts and regulators are enforcing this standard now, not waiting for future guidance.

The Fifth Circuit sanctioned counsel in February 2026 for using AI to draft a brief without verifying accuracy and responding evasively when the court asked about it. The court declined to create a special AI rule because existing sanctions doctrines already require verification. The message was direct: there is no algorithm for accountability.

AI Is a Tool, Not an Agent

The mistake is treating AI like a supervised assistant. It is not. AI is prompted, not supervised. It does not learn from correction the way a human associate does. It has no duty of loyalty and cannot reason through legal problems.

If a calculator fails, it outputs "ERROR." If AI fails, it outputs a plausible-sounding falsehood. That functional difference demands heightened vigilance.

Under United States v. Boyle, a Supreme Court precedent on non-delegable duties, reliance on an agent does not excuse failure to meet statutory obligations. Verifying whether a legal citation exists is a ministerial function, not legal judgment. Delegating it to AI without independent confirmation is a failure of ordinary business care.

Federal Rule of Civil Procedure 11 and Tax Court Rule 33 impose a gatekeeping function on the signing attorney. Verification of authorities remains non-delegable.

The Confidentiality Problem

Most generative AI tools are cloud-based. Inputting client data into them transmits it to third-party servers. If the AI provider logs, stores, or reuses that data, even in anonymized form, privilege may be lost.

Many AI terms of service reserve the right to use prompts for product improvement. This destroys confidentiality. The legal consequence is waiver of attorney-client privilege, not merely an ethical breach.

The Kovel doctrine extends privilege to non-lawyers (such as accountants) employed to assist in rendering legal advice. AI does not fit that model. Kovel requires an agency relationship and an expectation of confidentiality. AI systems lack agency, operate outside direct attorney oversight, and, where public models are used, carry no expectation of confidentiality.

If AI terms of service state that content improves services, the lawyer has disclosed the client's confidential tax strategy to a third-party commercial provider. That disclosure destroys privilege.

IRC §7216 imposes criminal penalties for unauthorized disclosure of tax return information. A practitioner who shares client income data or return structure with a public AI model may inadvertently trigger liability. Unlike traditional research tools, AI does not forget; it transforms inputs into permanent statistical weights.

What Regulators Require Now

Circular 230 §10.22 requires practitioners to exercise due diligence in determining the correctness of representations made to the Treasury. The regulation assumes human communicative agency. An AI system is not a "person" capable of being supervised.

Treasury should amend Circular 230 to clarify that reliance on generative AI without human verification violates the duty of diligence. Submitting AI-generated content that has not been manually confirmed should be treated as a breach.

The Tax Court should follow the Northern District of Texas and require certification that citations and quotations have been verified against primary sources. This codifies what ethical rules already require. A certification requirement forces a pause before submission.

Competence Now Includes Spotting Hallucinations

Model Rule 1.1 imposes a personal duty of competence. With AI, that duty now includes the ability to identify hallucinations and understand where AI could fail.

Each lawyer must verify outputs before use. The test is not whether the tool was "generally reliable"; it is whether the individual practitioner can catch errors.

Firms should prohibit use of public AI models for client matters. All AI tools must be sandboxed behind confidentiality firewalls.

Contracts with any AI tool, cloud vendor, or third-party processor must include "zero retention" clauses and verification that no input is used for training.

For drafting, retrieval-augmented generation models tied to verified databases offer a more defensible foundation than open-ended tools. Open systems that hallucinate waste time and create liability.
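To make the architectural difference concrete, here is a minimal Python sketch of the retrieval-augmented pattern, assuming a small firm-controlled store of verified passages: the model is shown only text retrieved from that store, so every authority it can cite is one a human has already vetted. The store contents, scoring method, and prompt format are illustrative, not any vendor's actual product.

```python
# Minimal sketch of retrieval-augmented drafting (hypothetical store and
# prompt format): the model sees only passages from a firm-verified store,
# so it is never asked to recall authorities from its training data.

VERIFIED_STORE = {
    "United States v. Boyle, 469 U.S. 241 (1985)":
        "Reliance on an agent does not excuse a taxpayer's failure to meet "
        "a fixed statutory filing deadline.",
    "IRC §6651(a)(1)":
        "Addition to tax for failure to file a required return by the due "
        "date, unless due to reasonable cause and not willful neglect.",
}

def retrieve(query: str, store: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Rank stored passages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        store.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Constrain the model to retrieved, human-vetted text with citations."""
    context = "\n".join(
        f"[{cite}] {text}" for cite, text in retrieve(question, VERIFIED_STORE)
    )
    return (
        "Answer using ONLY the passages below, and cite each one you rely on. "
        "If the passages do not answer the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("Is reliance on an agent reasonable cause for a late filing?"))
```

Even with retrieval narrowing the source pool, the human review described below still applies: the practitioner must read each retrieved passage in context before relying on it.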

Engagement Letters Must Disclose AI Use

Engagement letters should disclose whether AI is used, under what terms, and for which tasks. Clients should understand where automation is used and where human judgment begins. This aligns with Model Rule 1.4 and reduces the risk of misunderstanding.

No AI-generated work should reach a client, court, or agency without experienced human review. Verification must be routine: each cited source reviewed in context, each quotation matched against the original.
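The quotation-matching step is mechanical enough to script. The Python sketch below, with hypothetical citations and source texts, flags any quotation that lacks a source on file or does not appear verbatim in the cited text; reviewing the passage in context remains a human task.

```python
import re

# Mechanical quotation check (hypothetical inputs): flag quotations that lack
# a source on file or do not appear verbatim in the cited source text.
# This supplements human review; it does not replace reading in context.

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so line breaks don't cause misses."""
    return re.sub(r"\s+", " ", text).strip().lower()

def check_quotations(quotes: dict[str, str], sources: dict[str, str]) -> list[str]:
    """Return one flag per citation whose quotation cannot be verified."""
    flags = []
    for citation, quote in quotes.items():
        source_text = sources.get(citation)
        if source_text is None:
            flags.append(f"{citation}: no source text on file")
        elif normalize(quote) not in normalize(source_text):
            flags.append(f"{citation}: quotation not found verbatim in source")
    return flags

quotes = {
    "Case A, 123 F.4th 456": "failure to verify is sanctionable",
    "Case B, 789 F.4th 12": "AI output is presumptively reliable",
}
sources = {
    "Case A, 123 F.4th 456": "The court held that a failure to verify is "
                             "sanctionable conduct.",
}
for flag in check_quotations(quotes, sources):
    print("FLAG:", flag)
```

A clean run means only that the quoted words exist in the source on file; whether the quotation is fairly used is still the signing attorney's call.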

The human tax practitioner remains the gatekeeper.

