Teach AI or Risk Malpractice: A Wake-Up Call for Law Schools

Law schools lag on AI, leaving new lawyers exposed to errors, sanctions, and weaker client service. Make AI competence core: teach verification, citation discipline, and ethics.

Categorized in: AI News Legal
Published on: Oct 18, 2025

The Missing Course in Law School: AI Competence

In a few short years, the legal profession moved from skepticism about AI to widespread use. Yet many law schools still don't teach it. That's not a quirky omission. It's a failure to educate.

For research and drafting, AI can outperform humans on speed and sometimes on quality. But the tools make serious mistakes, and they do it with confidence. If graduates don't get structured AI training, they're walking into ethical risk, malpractice exposure, and weaker client service.

Competence now includes AI

Clients expect efficiency. Firms are adopting AI across research, drafting, and e-discovery. Competence isn't optional.

ABA Model Rule 1.1 is clear: to stay competent, lawyers must keep abreast of changes in the law and its practice, including the benefits and risks of relevant technology (Comment 8). See the ABA's language on tech competence here: Model Rule 1.1.

The risk is real, and it's already here

Since 2023, courts have been flooded with over 280 filings containing hallucinated AI citations. One attorney was sanctioned in Wyoming federal court for fake citations; local counsel was fined for signing the pleadings. Consequences aren't limited to the person who typed the prompt: signers are on the hook.

Another attorney at a top firm started from a real journal article but used an AI tool to draft a citation. The system invented a title and authors, and the error went straight into a court filing. That's a new kind of mistake: a true source corrupted into a false citation.

Even the vendors say "verify." OpenAI reported that its o3 model hallucinated 33% of the time, and that GPT-4.5 hallucinated on factual questions 37.1% of the time. ChatGPT remains the most used tool among lawyers, while legal research platforms still add disclaimers to AI outputs.

Law schools can fix this, fast

Talking about AI in a seminar won't cut it. Students need hands-on training paired with ethics and technical foundations. Teach how large language models work, why hallucinations happen, and where the limits are.

Just as schools train with Bloomberg Law, Lexis, and Westlaw, they must add AI research and drafting to the core toolkit. Graduates who lack this training will fall behind and expose clients to avoidable risk.

What to teach: a practical core

  • Verification by default: Every AI-assisted claim must be checked against primary sources and validated with Shepard's/KeyCite or an equivalent citator.
  • Citation discipline: No citation enters a draft without a human-opened source, correct title, author, and pinpoint. Save screenshots or PDFs to the file.
  • Source-first workflows: Start from the record, governing law, and trusted databases. Use AI to accelerate summaries and comparisons, not to originate legal authority.
  • AI use disclosures: When required by court rules, client policies, or firm policy, disclose AI assistance and the verification steps taken.
  • Confidentiality and privilege: Know what can and cannot be sent to external systems. Use approved tools and safe modes; strip or mask confidential data.
  • Drafting with constraints: Require citations, quotes, and docket references embedded in outputs. No cites, no use.
  • E-discovery fluency: Use AI features to triage, cluster, and surface hot docs, with human review for privilege and responsiveness.
  • Co-counsel risk: Set clear rules for shared work. You're responsible for what you sign.

Hands-on labs that build judgment

  • Hallucination drills: Give students prompts known to trigger bad cites. Grade on detection and correction, not speed.
  • Parallel research: Research a question with AI and with traditional tools. Compare outputs, cost, and error rates.
  • Fact-to-law mapping: Feed only facts to the tool; students must add the law. This prevents authority from appearing out of thin air.
  • Red-team assignments: Students try to break the tool (ambiguous queries, outdated law) and document failure modes.
  • Policy build-out: Teams draft an "AI use SOP" for a mock clinic: approved tools, logging, verification, disclosure, and data security.

Guardrails that prevent bad filings

  • Always link back: Require URLs, citations with reporter info, and docket numbers. If a tool can't show a source, treat the text as unverified notes.
  • Two-layer review: A first pass checks cites and quotes against originals; a second pass checks legal relevance and fit to the record.
  • Currency checks: Enforce "good law" validation before any filing, with overruled, distinguished, or negative treatment flagged in the memo.
  • Version control: Save prompts, outputs, and verification artifacts to the matter file for accountability (a minimal sketch of such a log follows this list).
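
For illustration, here is a minimal Python sketch of what one such log entry might capture. Everything in it is an assumption for teaching purposes: the AIUseRecord fields, the log_ai_use helper, and the matter_log.jsonl file are hypothetical choices, not an established standard or any vendor's tooling.

    # Hypothetical sketch: append one AI-use record to a matter log.
    # Field names, helper name, and file name are illustrative assumptions.
    import json
    from dataclasses import dataclass, asdict, field
    from datetime import datetime, timezone

    @dataclass
    class AIUseRecord:
        matter_id: str             # internal matter or case number
        tool: str                  # which AI system produced the output
        prompt: str                # the prompt as actually submitted
        output_file: str           # where the raw output was saved
        citations_checked: bool    # every cite opened and read by a human
        good_law_validated: bool   # citator check (e.g., KeyCite) completed
        reviewer: str              # who performed the verification
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def log_ai_use(record: AIUseRecord, path: str = "matter_log.jsonl") -> None:
        """Append one verification record to the matter's log file."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_ai_use(AIUseRecord(
        matter_id="2025-CV-0142",
        tool="example research assistant",
        prompt="Summarize the elements of promissory estoppel under New York law.",
        output_file="research/ai_output_2025-10-18.pdf",
        citations_checked=True,
        good_law_validated=True,
        reviewer="A. Associate",
    ))

However a school or clinic formats it, the point is the same: a verifiable record of who checked what, and when, stays with the matter.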

"The tools aren't good enough yet" isn't a reason to wait

Firms already use them. Courts already sanction misuse. Law schools don't control what tools graduates will face; they control whether graduates can use them responsibly.

If a system falls short, teach why. Show the failure rate, the patterns, and the fixes. That is what competence looks like.

A 90-day action plan for deans and faculty

  • Weeks 1-2: Adopt a school-wide AI policy (approved tools, data rules, disclosure, verification).
  • Weeks 3-6: Build three mandatory labs: research, drafting, and e-discovery. Include one graded hallucination drill.
  • Weeks 7-8: Integrate cite-check SOPs into LRW. Require human-opened sources for every AI-assisted citation.
  • Weeks 9-10: Add an ethics module tied to Model Rule 1.1, sanctions case studies, and client disclosure scenarios.
  • Weeks 11-12: Run a simulated court rule requiring AI disclosure and verification logs. Grade for completeness and judgment.

Bottom line

AI is already changing standards of diligence and competence. The misuse we've seen (fabricated citations, corrupted sources, missed verifications) is preventable with training.

Every graduating class without real AI education leaves school with a gap that clients will notice. Teach students to use the tools, test their limits, and verify everything. That's how you protect clients and strengthen the profession.

Want structured, practical AI curricula for legal teams? Explore course options here: Complete AI Training - Courses by Job.

