AI in the courtroom: useful assistant, poor decision-maker
At the South Zone Regional Judicial Conference in Bengaluru, senior judges issued a clear warning: do not lean on AI to decide cases. Recent episodes of "hallucinated" citations and even a fabricated judgment presented in court underline the risk. AI can speed up research, but it does not replace human reasoning or constitutional analysis.
The message was simple and direct: consider AI, don't rely on it. Treat outputs like tips, not truths.
The digital divide has a fourth layer: AI
Justice M Sundar outlined how the judiciary's digital gap isn't just about money or devices. It runs along four lines that often overlap:
- Digital natives vs. immigrants: those raised with tech and those adapting to it.
- Digital rich vs. poor: access to devices, bandwidth, and secure systems.
- Digitally skilled vs. unskilled: competence with tools that now shape daily courtwork.
- AI adopters vs. AI skeptics: those who see AI as an aid and those who fear it weakens independent judicial thinking.
Consider, don't depend: why courts are pushing back
AI tools tune themselves to prompts and patterns. They do not "understand" facts; they predict text. That's how fabricated case law slips in: plausible formatting wrapped around invented citations.
Courts have already seen lawyers submit AI-generated material that did not exist, including a fake "Supreme Court" judgment that was promptly dismissed and triggered administrative action. The takeaway: AI lacks cognition; it is statistical patterning, not legal reasoning.
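To make the mechanism concrete, here is a deliberately tiny next-word model in Python; every case name and citation in it is invented for illustration. Even at this scale, the model recombines fragments of real-looking citations into well-formed strings it has never actually seen, which is exactly how a plausible but non-existent authority gets generated.

```python
import random

# Toy next-word model: record which word follows which in a tiny
# "training" corpus of citation-shaped text. Purely illustrative, but
# the principle carries over: output is whatever continuation is
# statistically likely, true or not.
corpus = (
    "State v. Mehta 2014 SCC 512 . "
    "State v. Rao 2011 SCC 233 . "
    "Union v. Mehta 2019 SCC 101 . "
).split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def continue_text(prompt, n_words=5):
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# May print "State v. Mehta 2019 SCC 101": well-formed, plausible,
# and a citation that exists nowhere in the corpus.
print(continue_text("State v."))
```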
Standards for judges, registries, and counsel
- Zero-trust on citations: independently verify every authority against official reporters or court websites before quoting or filing.
- Source-first rule: no AI-sourced case law without a working source link and pin-cite to an authoritative database.
- Human certification: add a short note in filings stating all authorities were checked by a human against official sources.
- Prompt hygiene: if AI is used internally, keep prompts factual and minimal; never insert privileged or identifying client data into public models (a redaction sketch follows this list).
- No AI-authored holdings: orders and judgments must reflect human reasoning grounded in statute, precedent, and constitutional principles.
- Training and audits: run periodic sessions for bench and bar on AI failure modes; randomly audit filings for fake citations.
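On prompt hygiene specifically, one practical safeguard is to strip obvious identifiers before any text leaves a controlled environment. A minimal sketch, assuming regex patterns chosen here purely for illustration (not a vetted redaction standard, and no substitute for human review):

```python
import re

# Illustrative patterns and placeholder tags; a production system would
# need a vetted, jurisdiction-specific list plus human review.
PATTERNS = [
    (re.compile(r"\b[A-Z][a-z]+ v\. [A-Z][a-z]+\b"), "[CASE-NAME]"),
    (re.compile(r"\b\d{4}/\d{2,6}\b"), "[CASE-NO]"),  # e.g. 2023/4512
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text: str) -> str:
    """Replace likely identifiers with placeholder tags before prompting."""
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

print(redact("In Sharma v. Verma (case 2023/4512), reach counsel at a.sharma@example.com."))
# -> In [CASE-NAME] (case [CASE-NO]), reach counsel at [EMAIL].
```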
Operational safeguards for courts
- Tool whitelists: prefer systems that expose sources and citations; avoid black-box outputs without verifiable references.
- Citation checkers: integrate automated cite-validation against official repositories; flag non-resolvable references (see the sketch after this list).
- Bench-note discipline: use AI for chronology, issue spotting, and document collation, never for conclusions of law or ratio.
- Data protection: restrict uploads of case materials to approved, secure environments; log AI usage for accountability.
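For citation checking, one building block is confirming that each cited source actually resolves at the repository it claims to come from. A minimal sketch, assuming citations have already been extracted as URLs; the URL below is a placeholder, and a real integration would query official databases rather than rely on raw HTTP checks:

```python
import urllib.error
import urllib.request

def resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL answers with a 2xx status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 300
    except (urllib.error.URLError, ValueError):
        return False

# Hypothetical source links attached to the authorities in a filing.
citations = [
    "https://example.com/judgments/2021/123",  # placeholder URL
]

for url in citations:
    print(url, "->", "ok" if resolves(url) else "FLAG: does not resolve")
```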
Access to justice: close the gap, don't widen it
Expanding e-service centres in remote areas is non-negotiable. Technology must reach litigants directly: filings, cause lists, orders, and payments should be usable with basic devices and patchy bandwidth.
- Mobile e-service units for filings, e-pay, and video hearings.
- Local language interfaces and on-site assistance.
- Connectivity audits and device kiosks at taluk and district courts.
Practical access beats fancy tools. If litigants can't use the system, the system fails them.
"Cyborg" judging, done right
No one is asking for robot judges. The workable model is human-led decisions supported by computation where it helps: search, timelines, duplication checks, and document organization.
- Let AI handle retrieval and summaries; let judges and lawyers do the weighing and reasoning (a retrieval sketch follows this list).
- Cross-examine every generated fact and citation before it touches the record.
- Keep the why of the decision human: constitutional values, statutory text, and tested precedent.
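To ground the division of labour, here is a minimal keyword-retrieval sketch; the filenames, texts, and scoring are illustrative, and a real system would use proper indexing and ranking. The point is the boundary it draws: the tool surfaces passages, the human weighs them.

```python
# Rank documents by how many query terms they contain. Retrieval only:
# what the passages mean is left entirely to the reader.
documents = {
    "plaint.txt": "The supply contract was signed on 3 March 2020",
    "reply.txt": "Delivery under the contract failed in June 2020",
    "order1.txt": "Interim relief was declined pending evidence",
}

def score(text: str, query: str) -> int:
    """Count how many query terms appear in the document."""
    words = set(text.lower().split())
    return sum(1 for term in query.lower().split() if term in words)

def search(query: str):
    """Return matching documents, best match first."""
    hits = [(name, score(text, query)) for name, text in documents.items()]
    return sorted([h for h in hits if h[1] > 0], key=lambda h: h[1], reverse=True)

print(search("contract 2020"))
# -> [('plaint.txt', 2), ('reply.txt', 2)]
```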
Resources
- eCourts Services (Government of India)
- Mata v. Avianca, SDNY docket (AI "hallucinated" citations case)