When AI Makes Up the Law: Truth, trust, and the lawyer's duty to verify

GenAI speeds drafting, but confident falsehoods can sink a case, and a career. Courts are clear: verify every citation, disclose AI use where required, and keep judgment and duty firmly human.

Published on: Nov 23, 2025
Truth, Trust, and Technology: The Legal Profession in the Age of AI Hallucinations

Generative AI has moved from novelty to daily tool. Models like ChatGPT, Gemini, and Grok are now used by millions for drafting, research, and decision support. According to OpenAI, by mid-2025 ChatGPT had more than 700 million weekly active users. Productivity is up, but so are the risks.

The core risk for law isn't style or tone. It's false facts stated with confidence. AI can produce case citations, quotes, and "precedents" that don't exist. That is a direct hit to the heart of legal work: truth, verification, and duty to the court.

Hallucination, Confabulation, or Fabrication?

Researchers argue over labels, but the behavior is clear: an AI generates information that is wrong, unsupported, or invented, and presents it as true. Some call it "hallucination," others prefer "confabulation" (convincing gap-filling) or "fabrication."

Whatever the term, the pattern is the same: confident output without verification. For lawyers, that's a liability, not a shortcut.

Why Distorted Information Hits Law Hard

Law runs on retrieval, precision, and source integrity. Lawyers already rely on technology, from LexisNexis to Westlaw, for speed and coverage. GenAI adds further speed for drafting, summarizing, and brainstorming arguments.

But speed without verification invites damage: fake citations in pleadings, incorrect quotes, and misleading authorities stitched together by a model that doesn't "know"; it predicts. That puts reputations, clients, and cases at risk.

What Courts Are Saying

In 2023, the Southern District of New York addressed fake case citations produced via ChatGPT in Mata v. Avianca, Inc. The court did not say using AI is improper per se, but it made the gatekeeping duty crystal clear: lawyers must ensure the accuracy of their filings. The court also noted the cost of uncritically copying AI errors: wasted time, reputational harm, and cynicism about the profession.

In 2025, the Eastern District of Oklahoma reached a similar bottom line. The court emphasized that technology can produce words, but only lawyers bring belief, responsibility, and judgment. Fabricated cases, fictitious quotations, and misleading statements were flagged as unethical uses of GenAI.

Both courts issued monetary sanctions tied to counsel's failure to verify citations and to disclose reliance on AI. One court expressly grounded its analysis in Rule 11(b) of the Federal Rules of Civil Procedure, noting that a "reasonable inquiry" cannot be delegated to software. That court also required amended pleadings plus a signed certification that each citation and statute had been verified.

Read Rule 11(b) (Cornell LII)

India: Early Warnings

Indian High Courts have already confronted petitions citing case laws that never existed. Judges have urged advocates to verify AI-generated content with primary sources. One global tracker lists hundreds of AI-hallucination-related matters, and that number is likely to rise.

A Practical Protocol For Legal Teams

  • Never trust, always verify: treat AI outputs as drafts. Confirm every case, quote, and statute in primary sources.
  • Use citators every time: KeyCite/Shepardize before a citation hits a draft sent to a client or the court.
  • No blind citations: do not paste AI-supplied citations without opening and reading the underlying authority.
  • Quote discipline: cross-check every block quote against the reporter or official PDF. If you can't verify, don't use it.
  • Human certification: attach a short verification certificate for filings stating that each citation and statute was personally checked.
  • Disclosure policy: if AI assisted in drafting, follow court rules and your firm policy on disclosure.
  • Limit the task: let AI help with outlines, summaries, and style edits, but keep citations and authorities under strict human control.
  • Use retrieval with guardrails: prefer tools that cite sources and allow you to click through to the underlying document.
  • Maintain an AI log: record prompts, tools used, and verification steps for auditability and client communication.
  • Second set of eyes: for any AI-assisted draft, require human review by someone who did not run the initial prompt.

Firm Policy, Training, and Deterrence

  • Policy: issue a short, clear AI-use policy covering disclosure, verification, data handling, and sanctions for violations.
  • Matter intake: set expectations with clients on how AI may be used and how it will be supervised.
  • Training: teach lawyers and staff how AI fails, not just how it helps. Verification is a skill, not a checkbox.
  • Regulatory action: Bar Councils and courts should issue model rules with real consequences to create deterrence.

Sample Verification Certificate (Short Form)

Certification: I certify that each case citation, statutory reference, and quoted passage in this filing has been personally reviewed against the primary source, and that any AI tool used in drafting did not substitute for my professional judgment or a reasonable inquiry.

Bottom Line

AI can draft fast. It cannot carry your license. Truthful filings, due diligence, and clear judgment remain human duties.

As Justice H. R. Khanna reminded us: eternal vigilance is the price of liberty. In this era, it's also the price of using AI responsibly.
