Is Generative AI's 'Da Silva Moore' Moment Approaching?
In Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), Magistrate Judge Andrew Peck approved predictive coding for discovery and gave legal teams cover to use technology at scale. Generative AI is nearing a similar inflection point. The question is whether your firm will be ready to defend its use the day a court signals clear acceptance.
What That Moment Would Look Like
- A published opinion explicitly approving a GenAI-assisted workflow (with human supervision and documented validation) for drafting, review, or discovery tasks.
- Standing orders or ESI protocols that reference GenAI, set disclosure expectations, and define reasonable controls.
- Courts applying proportionality and Rule 26(g) standards to GenAI processes the same way they treated TAR: focusing on reasonableness, transparency, and results.
Where Courts Stand Today
Judges have sanctioned lawyers for fabricated citations and sloppy oversight, but they haven't banned AI. Several courts now require disclosure or certification when AI is used, while others expect competence, supervision, and source checking as a baseline. The signal is clear: AI is acceptable if your process is defensible.
Make GenAI Defensible Under Rule 26(g)
- Use policy: Define approved use cases (e.g., first-pass drafting, summarization, brainstorming, privilege log drafts). Prohibit final filing without human review.
- Disclosure standard: Decide what you'll disclose, when, and to whom. Be prepared to describe process, not prompts.
- Confidentiality: Use enterprise tools with no training on your data, encryption at rest/in transit, and clear retention/deletion terms.
- Documentation: Record model/version, settings, date, dataset provenance, and who reviewed what. Keep a concise audit trail (a minimal record sketch follows this list).
- Quality control: Sampling, dual-review on sensitive work, and mandatory citation checks for any legal authority.
- Vendor diligence: Security, data use/IP terms, audit rights, breach notice, and jurisdiction. No hidden model training on client data.
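To make the documentation bullet above concrete, here is a minimal sketch of an append-only audit record. The GenAIAuditRecord fields, file name, and example values are illustrative assumptions, not a prescribed schema; adapt them to your matter-management system.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class GenAIAuditRecord:
    """One entry in a matter's GenAI audit trail (field names are illustrative)."""
    matter_id: str        # internal matter number
    task: str             # e.g., "privilege log draft"
    model: str            # model name as licensed from the vendor
    model_version: str    # version locked for this matter
    settings: dict        # temperature, context limits, etc.
    data_sources: list    # provenance of the inputs used
    reviewer: str         # attorney who verified the output
    reviewed_at: str      # ISO-8601 timestamp of the review

def log_use(record: GenAIAuditRecord, path: str = "genai_audit.jsonl") -> None:
    # Append-only JSON Lines file: simple to produce, easy to show a court.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_use(GenAIAuditRecord(
    matter_id="2024-0117",             # illustrative
    task="privilege log draft",
    model="vendor-llm",                # hypothetical model name
    model_version="2024-06-01",
    settings={"temperature": 0.2},
    data_sources=["ESI production vol. 3 metadata"],
    reviewer="A. Attorney",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
))
```

An append-only log is deliberately boring: no edits, no deletions, just a dated trail a reviewer can walk through.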
Discovery Workflows Ripe for GenAI (With Guardrails)
- First-pass review: Issue spotting, cluster descriptions, and rationale summaries to speed human decisions.
- RFPs/RFAs/Interrogatories: Drafting, refining definitions, and creating alternative formulations.
- Privilege logs: Draft descriptions from metadata and email content; attorneys finalize.
- Deposition and hearing prep: Summaries, witness outlines, and cross-examination themes tied to cite-backed exhibits.
- TAR + GenAI: Use TAR for classification/recall and GenAI for explanations and summaries. Validate both separately.
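For the last item above, recall and precision for the TAR layer come straight from an attorney-reviewed validation sample. A minimal sketch, assuming you already have true/false positive and false negative counts from that sample (the numbers below are illustrative):

```python
def precision(tp: int, fp: int) -> float:
    # Of the documents marked responsive, the share that truly are.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    # Of the truly responsive documents, the share the classifier found.
    return tp / (tp + fn) if (tp + fn) else 0.0

# Counts from an attorney-reviewed validation sample (illustrative).
tp, fp, fn = 412, 38, 27
print(f"precision={precision(tp, fp):.3f}, recall={recall(tp, fn):.3f}")
# precision=0.916, recall=0.938
```

Validate the GenAI layer (explanations, summaries) separately with the blind-sampling approach in the next section; a good TAR score says nothing about summary accuracy.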
Validation You Can Defend
- Acceptance criteria: Define accuracy targets up front (e.g., citation accuracy 100%, privilege precision ≥95%, summary error rate ≤5%).
- Blind sampling: Random samples reviewed by attorneys; track error rates and remediation (a worked sketch follows this list).
- Elusion sampling: Test for what was missed in non-responsive sets for critical issues.
- Stress tests: Adversarial prompts and edge-case documents; require pass/fail thresholds before production use.
- Change control: Lock model versions for a matter; revalidate on upgrades.
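To report a blind-sample or elusion result you can defend, pair the observed rate with a confidence interval so the number does not look cherry-picked. A minimal sketch using the Wilson score interval; the sample counts are illustrative assumptions:

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for an observed error rate."""
    if n == 0:
        return (0.0, 0.0)
    p = errors / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return (max(0.0, center - margin), min(1.0, center + margin))

# Blind sample: 6 errors in 400 attorney-reviewed outputs (illustrative).
low, high = wilson_interval(6, 400)
print(f"observed {6/400:.1%} error rate; 95% CI {low:.1%} to {high:.1%}")

# Elusion sample: 3 responsive documents found among 500 drawn from the
# "non-responsive" set (illustrative).
low, high = wilson_interval(3, 500)
print(f"elusion {3/500:.1%}; 95% CI {low:.1%} to {high:.1%}")
```

Compare the interval's upper bound, not just the point estimate, against your acceptance criteria; that is the conservative reading a court is most likely to credit.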
Ethics and Evidence Touchpoints
- Competence: Keep skills current on the benefits and risks of AI (see ABA Model Rule 1.1, Comment 8).
- Confidentiality: Protect client data (Rule 1.6) and supervise vendors and staff (Rule 5.3).
- Evidence: For AI-generated exhibits, be ready on authenticity (FRE 901), foundation, and potential 403 issues. Anchor every statement in produced ESI or admissible sources.
- Work product: Treat outputs as drafts; maintain source citations and rationale so a human can explain decisions.
Meet-and-Confer Checklist (Add to Your ESI Playbook)
- Whether and where GenAI will be used (drafting, summaries, privilege logs).
- Any planned disclosures about AI process (without divulging prompts or privileged content).
- Quality controls: sampling sizes, acceptance criteria, error remediation (a sample-size sketch follows this list).
- Clawback and privilege protocols mindful of AI workflows (FRE 502(d)).
- Security: data residency, vendor controls, and retention limits.
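When negotiating sampling sizes, the standard normal-approximation formula for estimating a proportion gives a defensible starting point. A minimal sketch, with the margin of error and confidence level as illustrative assumptions:

```python
import math

def sample_size(margin: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Documents to review to estimate a rate within +/- margin.

    p = 0.5 is the conservative assumption (largest variance);
    z = 1.96 corresponds to 95% confidence.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size(0.05))  # 385 documents for +/-5% at 95% confidence
print(sample_size(0.02))  # 2401 documents for +/-2% at 95% confidence
```

Note that the required sample barely depends on collection size for large collections, which is why a few hundred documents can validate a million-document review.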
Contract Terms for AI Vendors
- No training on your data; clear IP ownership of outputs and your inputs.
- Security certifications, logging, and breach notice timelines.
- Model/version transparency; notice before changes; rollback options.
- Audit rights, indemnity for data misuse, and subcontractor controls.
30-60-90 Day Implementation Plan
- 30 days: Approve policy, select enterprise-safe tools, run a limited pilot on internal memos and summaries.
- 60 days: Add sampling templates, a citation-check workflow (see the sketch below), and a disclosure paragraph for ESI orders.
- 90 days: Expand to first-pass review and privilege log drafts with attorney sign-off; run a validation report you could show a court.
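One building block of a citation-check workflow is extracting every reporter citation from a draft so a human verifies each against the actual source. A minimal sketch; the regular expression covers only a few common federal reporter formats and is an illustrative assumption, not a complete citation parser:

```python
import re

# Matches common federal reporter cites, e.g. "287 F.R.D. 182".
# Illustrative only; production workflows need a real citation parser.
CITE_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.\d?d|F\. Supp\. \d?d|F\.R\.D\.)\s+\d{1,4}\b"
)

def extract_citations(draft: str) -> list:
    """Return every candidate citation for mandatory human verification."""
    return CITE_RE.findall(draft)

draft = "See Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012)."
for cite in extract_citations(draft):
    print(f"VERIFY AGAINST SOURCE: {cite}")
```

The point is not automation for its own sake: the script produces a checklist, and an attorney still pulls and reads every cited authority.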
Signals The Moment Has Arrived
- An appellate or widely cited trial opinion endorsing a GenAI workflow as reasonable with documented controls.
- Local rules or standing orders referencing AI usage, disclosure, and validation expectations.
- Industry-standard evaluation methods (mirroring TAR's recall/precision era) cited by courts.
Bottom Line
Courts will judge GenAI by reasonableness, not hype. Build policies, controls, and documentation now so you can show your work the day a judge asks. The firms that prepare will set the standard, just like early adopters did after Da Silva Moore.
For structured skill-building on prompts and AI-enabled workflows, explore practical courses here: Courses by Job and Prompt Engineering.
For risk frameworks to support your validation plan, see the NIST AI Risk Management Framework.