Bluebook's Rule 18.3 on AI Citations Sparks Legal Backlash

Bluebook Rule 18.3 seeks to standardize AI citations, but lawyers call it confusing, burdensome, and risky. Treat AI as a tool: cite sources, disclose only when required.

Published on: Sep 18, 2025

Bluebook Rule 18.3 on AI: Why Lawyers Are Pushing Back (and What to Do Now)

The Bluebook's 22nd edition added Rule 18.3, its first attempt to standardize citations to generative AI. It was meant to help. Instead, it triggered sharp criticism from legal scholars and practitioners who say it misses the point of legal citation and misunderstands how AI is actually used.

The core objection is simple: AI is a tool, not an authority. Treating it like a source to be cited - especially with exact prompts and archived PDFs - creates confusion, technical overhead, and ethical risks.

What Rule 18.3 Actually Requires

Rule 18.3 splits AI usage into three buckets, each with different citation elements and a requirement to save a screenshot of the AI output as a PDF.

  • 18.3(a) Large language models (text output): Include the author of the prompt; the model name and version; the exact prompt in quotation marks; the date submitted; and a parenthetical noting where the PDF is stored.
  • 18.3(b) Search results (AI-powered search): Include the search engine name; the exact query in quotation marks; number of results (if available); the date of the search; and a parenthetical with PDF storage location.
  • 18.3(c) AI-generated non-text content: Follow the relevant content subrule; if an author is required, use the prompter's name (or omit if unknown); add a parenthetical indicating it was AI-generated and the model used.
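To see how the 18.3(a) elements fit together, here is a minimal sketch that assembles them into a citation string. The punctuation and ordering are illustrative only - not the Bluebook's official template - and the names and dates are invented.

```python
# Minimal sketch of the five 18.3(a) elements as fields; the rendering
# below is illustrative, not the Bluebook's official format.
from dataclasses import dataclass

@dataclass
class LLMCitation:
    prompter: str   # author of the prompt
    model: str      # model name and version
    prompt: str     # the exact prompt, quoted in the citation
    date: str       # date the prompt was submitted
    storage: str    # parenthetical noting where the PDF is stored

    def render(self) -> str:
        return (f'{self.prompter}, {self.model}, "{self.prompt}" '
                f'({self.date}) (PDF {self.storage})')

# Invented example values:
print(LLMCitation("Jane Smith", "ChatGPT-4", "Summarize Rule 18.3",
                  "Sept. 1, 2025", "on file with author").render())
```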

Critics note even this structure is confusing. As Cullen O'Keefe asks: What's the difference between "large language models" and "AI-generated content," when LLM output is itself AI-generated content?

What's Broken - According to Legal Scholars

1) The rule never answers "When should AI be cited?" Jessica R. Gunder argues the rule explains how to cite AI but not whether to cite it as a source or merely disclose it as a tool. If AI only helped find sources or draft language, cite the sources themselves; you wouldn't cite a research assistant.

2) Internal inconsistencies. O'Keefe points out examples in The Bluebook that don't mirror its own requirements (e.g., missing quoted prompts; inconsistent inclusion of company names like OpenAI). That defeats the goal of uniformity.

3) Unreasonable technical burden. Requiring screenshots, scrolling captures, and PDFs assumes a level of tech skill many lawyers and students do not have. As Gunder and Jayne Woods note, even creating bookmarked PDFs with live links is out of reach for most legal users.

4) Incompatible with real AI workflows. Effective prompting is iterative: uploads, refinements, multiple drafts. The "exact prompt" that produced the final text might be trivial compared to the prior exchanges, or the full thread is too long to cite. Either way, the citation misleads or becomes unworkable.

Ethical Landmines

Requiring prompts in citations risks exposing confidential case facts and attorney thought process. That collides with confidentiality and work-product protections.

Prompts often contain both: client-specific facts and the lawyer's strategic reasoning. If Rule 18.3 is read to require disclosure across the board, it can push lawyers toward violating those duties.

"Don't Cite AI - Disclose It"

Several scholars argue for a clearer distinction. Susan Tanner's position: in 99% of cases, don't cite AI at all; cite the verified authorities AI helped you locate. Where the AI output itself is the evidence (e.g., documenting what the tool said), make that purpose explicit.

Example she offers: OpenAI, ChatGPT-4, "Explain the hearsay rule in Kentucky" (Oct. 30, 2024) (conversational artifact on file with author) (not cited for accuracy of content).

Why This Matters for Practice and Academia

Law schools now have to teach Rule 18.3 while drawing a firm line around when citation is appropriate versus when simple disclosure (if any) is enough. Law reviews must set policies, especially on whether to require storage of AI transcripts or PDFs.

Practitioners face a patchwork of court orders on AI disclosure. Jurisdictions like Florida that adopt The Bluebook as official authority may see uneven enforcement and confusion.

Practical Guidance You Can Use Now

  • Set a firm policy: Cite sources, not tools. Disclose AI use only when required by court order, client agreement, or journal policy.
  • Define "cite" vs. "document": Cite AI only when the AI output itself is the object of discussion (e.g., to show what the tool said). Use parentheticals to clarify it's not cited for truth.
  • Protect confidentiality: Strip prompts of identifying facts. Use anonymization or summaries if disclosure is compelled. Confirm vendor confidentiality terms before uploading any client material.
  • Preserve work product: Keep iterative prompts and edits out of the public record unless required. Maintain internal logs in secure systems if policy demands retention.
  • Simplify the record: If a court or journal requires saving outputs, store a clean PDF of the final AI response, plus a short description of the prompt's purpose - not a full transcript - unless the rule says otherwise (a sketch of this workflow follows the list).
  • Train your teams: Standardize templates for AI disclosures, teach redaction and PDF creation, and audit for compliance before filing.
  • Watch local rules: Track standing orders on AI use and disclosure. Align your policy to the most restrictive forum you practice in.
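
A few of these steps are mechanical enough to show concretely. Below is a minimal sketch of the redact-then-store workflow from the bullets above; the redaction patterns, field names, and JSON record format are all assumptions made for illustration, not a vetted compliance tool.

```python
import json
import re
from datetime import date

# Hypothetical patterns; real redaction needs human review before filing.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\bJane Doe\b"), "[CLIENT]"),  # known client names
]

def redact(text: str) -> str:
    """Strip obvious identifiers from a prompt before anything is stored."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def save_record(final_output: str, prompt_purpose: str, model: str, path: str) -> None:
    """Store the final AI response plus a short, redacted purpose note."""
    record = {
        "model": model,
        "date": date.today().isoformat(),
        "prompt_purpose": redact(prompt_purpose),  # a summary, not the transcript
        "final_output": final_output,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

save_record(
    final_output="[model's final answer here]",
    prompt_purpose="Asked for a plain-language summary of the hearsay rule "
                   "for client Jane Doe (jdoe@example.com).",
    model="ChatGPT-4",
    path="ai_record.json",
)
```

The point of the design is that the stored record carries a short, redacted purpose note rather than the full transcript, so the file stays useful without putting client facts or strategy on the record.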

What The Bluebook's Editors Should Fix

  • State clearly when AI should be cited versus disclosed - and when neither is appropriate.
  • Remove internal contradictions (prompt quotation, model/company naming) and align examples with the rule text.
  • Offer a workable standard for iterative chats (e.g., cite the final output with a short provenance note, not a transcript).
  • Address confidentiality and work-product risks explicitly, with safe-harbor formats.
  • Reduce technical overhead (no scrolling screenshots; allow text exports with hash verification).
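
Of these, hash verification is the easiest to pin down technically. A minimal sketch, assuming a plain-text export of the conversation (the file name and contents here are invented):

```python
import hashlib
from pathlib import Path

# Invented export; in practice this is the text exported from the AI tool.
Path("chat_export.txt").write_text("Q: ...\nA: ...", encoding="utf-8")

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of an exported file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

digest = fingerprint("chat_export.txt")          # record at citation time
assert fingerprint("chat_export.txt") == digest  # recompute later to verify
```

Any edit to the export changes the digest, so a cited transcript can be checked for tampering without storing scrolling screenshots.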

Bottom Line

Rule 18.3 tries to keep pace with AI, but critics call it confusing, burdensome, and risky. Treat AI as a tool. Cite the authorities you relied on, disclose AI use only when it's material or required, and protect client confidences and work product at every step.

If you need structured training for staff on safe, efficient AI use in legal work, see the curated options here: AI courses by job.