Litigation AI Monitor - January 2026
AI is showing up in chambers. It can speed research, streamline drafting, and widen access to justice. It can also undermine trust if it short-circuits human judgment, exposes sensitive data, or bakes in bias. Here's a practical guide for legal professionals on how courts can use AI ethically, and how advocates should respond.
Judicial Use of AI: Ethical Issues
AI can help courts move faster. It can also create new failure modes that are easy to miss under deadline pressure. Six issues deserve consistent attention.
1) Judicial independence
Rule 1.2 of the ABA Model Code of Judicial Conduct requires judges to act in a manner that promotes public confidence in the independence, integrity, and impartiality of the judiciary. That means AI cannot replace human judgment. The risk isn't science fiction; it's automation bias: treating a confident AI answer as correct without verifying it.
Recent missteps prove the point. Two federal judges withdrew opinions after staff used AI and introduced errors. A Georgia appellate court vacated a trial court order that relied on nonexistent authority (Shahid v. Esaam, 918 S.E.2d 198 (Ga. Ct. App. 2025)). The fix is simple: treat AI outputs like any research lead and verify facts and law before relying on them.
2) Proper oversight of staff
Under Rule 2.12, judges must supervise chambers consistent with ethical duties. If an intern or clerk uses AI, the judge owns the result. Courts should set clear written rules: approved tools, banned uses, verification steps, and sign-off requirements for anything entering the record.
3) Bias in AI
Rules 2.2 and 2.3 require impartiality and a bench free from bias. AI can reflect skewed training data and produce disparate impacts. Studies of certain risk assessment tools, for example, found higher false-positive rates for Black defendants compared to white defendants.
Judges and staff should ask basic questions before relying on outputs: What data trained the tool? Is it representative for this use? What are known error rates across demographics? If the answers are vague, caution is warranted.
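To make "error rates across demographics" concrete, here is a minimal sketch (in Python, using made-up records and hypothetical field names) of how a false-positive rate comparison across groups is computed; a real validation study would use the tool's actual outcome data and far larger samples.

    # Hypothetical sketch: compare a risk tool's false-positive rates across groups.
    # A "false positive" here means a defendant flagged high-risk who did not reoffend.
    from collections import defaultdict

    # Each record: (group, flagged_high_risk, reoffended) -- illustrative values only.
    records = [
        ("group_a", True, False), ("group_a", True, False),
        ("group_a", False, False), ("group_a", True, True),
        ("group_b", True, False), ("group_b", False, False),
        ("group_b", False, False), ("group_b", True, True),
    ]

    false_positives = defaultdict(int)   # flagged non-reoffenders per group
    non_reoffenders = defaultdict(int)   # all non-reoffenders per group

    for group, flagged, reoffended in records:
        if not reoffended:
            non_reoffenders[group] += 1
            if flagged:
                false_positives[group] += 1

    for group in sorted(non_reoffenders):
        rate = false_positives[group] / non_reoffenders[group]
        print(f"{group}: false-positive rate = {rate:.0%}")

If the reported rates diverge sharply across groups, as they do in this toy example, that is exactly the kind of finding that should prompt the caution described above.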
4) Disclosure of AI use
Traditionally, the judiciary explains its decisions in opinions, not in process logs. Mandatory disclosure of AI use is debated, and rules are emerging. California instructs judges to consider disclosure if generative AI creates content provided to the public and to remove biased or harmful content before use.
In practice, disclosure may be unnecessary for research or document summarization if accuracy is independently verified. High-impact uses, such as drafting substantive analyses, may justify more transparency.
5) Exposure of PII
Treat public AI platforms as public. Sensitive filings, sealed materials, or PII should not be uploaded to public tools. Courts should prefer secure, enterprise deployments with contractual safeguards, logging, and data-retention controls, or avoid uploads entirely.
6) AI training and competence
Rule 2.5 ties judicial duties to competence and diligence. Similar to lawyers' technology competence, judges and staff should maintain baseline literacy in AI's benefits and risks. Several states have issued ethics opinions underscoring that obligation. Ongoing education reduces both overreliance and reflexive rejection.
Do current ethical rules prohibit judges from using AI?
No. There's no blanket prohibition in the United States. The guardrails come from existing ethics rules and emerging state policies. The through-line: AI cannot substitute for human judgment, sensitive information must be protected, and outputs must be verified.
- New Jersey's judiciary issued principles emphasizing judicial responsibility, ongoing monitoring of AI developments, protection of sensitive information, and measures to ensure fairness.
- Illinois and Delaware adopted policies with similar themes: human judgment first, data security, and caution with public tools.
- California bars entering confidential or nonpublic information into public generative AI systems and requires reasonable steps to remove biased or harmful content from AI materials used by judges (Cal. Rules of Court 10.80).
- New York's interim policy warns that AI is not a substitute for human judgment and treats any input to public models as effectively public, discouraging uploads even of "public" documents that could later be sealed.
Appropriate uses exist. AI can surface research leads, summarize records, draft timelines, and help with low-risk drafting (emails, short orders), with verification. Substantive opinions demand heightened oversight and line-by-line review with citations checked to the source.
How have judges reacted to the judicial use of AI?
Reactions vary. Many judges are curious about efficiency gains but equally concerned about litigants submitting error-filled, AI-written filings. Committees across jurisdictions are studying use cases and risks.
Some judges have experimented with language questions, using AI to test ordinary meaning in opinions, e.g., "landscaping" and "physically restrained" (Snell v. United Specialty Ins. Co., 102 F.4th 1208 (11th Cir. 2024) (Newsom, J., concurring); United States v. Deleon, 116 F.4th 1260 (11th Cir. 2024) (Newsom, J., concurring)). Judges in D.C. have explored similar uses (Ross v. United States, 331 A.3d 220 (D.C. 2025) (Deahl, J., dissenting)). Others report using AI for summaries, and some have published practical frameworks for chambers use. Skeptics remain, citing accuracy concerns. Education and tight guardrails can narrow the gap.
Is there a need to create new rules to address judicial use of AI?
For now, no. Technology-neutral ethics rules already cover independence, impartiality, competence, confidentiality, and supervision. AI-specific conduct rules risk getting outdated fast and adding complexity without adding clarity.
Where new guidance would help is on evidence: how to treat AI-generated material, validation standards for risk tools, disclosures about training data, and instructions clarifying that AI outputs are not evidence absent foundation.
Practical tips for litigators
- Assume mixed AI literacy on the bench. Explain the tool, data sources, methods, limitations, and error rates in plain language if they matter to your motion.
- Verify everything. Attach key authorities, include pincites, and add short parentheticals. If AI assisted, treat it as a lead, not a source.
- Propose protective language. Bar uploads of PII or nonpublic materials to public AI tools. Mirror California and New York principles where appropriate.
- Address automation bias. Invite the court to confirm independent review of any AI-assisted analysis. Flag areas where AI is known to struggle (e.g., hallucinated citations, ambiguous statutes).
- Scrutinize risk tools. Demand validation studies, demographic performance metrics, and disclosure of variables. Consider Daubert/Frye challenges or limiting instructions.
- Offer workable chambers protocols. Suggest verification checklists, allowed vs. disallowed tools, and extra review for opinions.
- Protect the record. If a party relies on AI outputs, seek transparency about model type, data provenance, and safeguards. Ask for the underlying sources, not just the conclusions.
- Practice data hygiene. Redact early and often. Use enterprise-grade tools approved by your client or court. Keep an internal log of AI use for quality control.
- Educate succinctly. A short, neutral appendix or bench brief on AI issues can prevent confusion and reduce side litigation.