Judges use AI for administrative tasks but insist on human control over decisions, WVU study finds

U.S. judges are using AI to summarize documents and prep for hearings, but a WVU study found all 13 interviewed insist humans must retain full control over decisions. Hallucinations remain their top concern.

Published on: May 11, 2026

Judges Cautiously Adopt AI While Keeping Final Say on Decisions

Judges across the United States are using generative AI in their courtrooms, but they're not ceding control over judicial decisions. New research from West Virginia University found that judges view these tools as administrative helpers, not replacements for human judgment.

Amy Cyphert, an associate professor at WVU's College of Law, led interviews with 13 state and federal judges to understand how they're actually deploying generative AI and LLM tools. The research emerged from a practical gap: policymakers were debating AI in the abstract while judges were already using it.

"Every single judge we spoke with was clear-eyed about this," Cyphert said. "They see these tools as helpful, but they also believe very strongly that the responsibility for decision making must remain entirely human."

How Judges Are Using AI Today

The judges reported using AI for document summarization, organizing case materials, drafting speeches, and preparing questions for oral arguments. They described the technology as a force multiplier that handles preparatory work, freeing time for actual judging.

Some judges see broader potential. AI could improve accessibility for people navigating courts without legal representation through clearer explanations and easier-to-follow procedures.

Those benefits come with costs. Verifying AI outputs takes additional time, and errors can undermine public confidence in the courts.

The Hallucination Problem

All 13 judges cited AI hallucinations, instances where systems confidently generate false or misleading information, as a primary concern. Sometimes errors are obvious. Often they're not.

A single mistake in a court opinion or filing could damage public trust in the judiciary. Judges are approaching these tools with corresponding caution.

Privacy and cybersecurity concerns also shape their use. Many judges avoid applying AI to confidential or sealed materials and monitor how staff members deploy the technology.

What Judges Need Going Forward

The research identifies growing demand for clearer policies on disclosure, acceptable use, and ethical guidelines. Judges want practical training on using AI effectively and spotting errors before they reach the record.

The findings suggest that judicial use of AI will evolve alongside the technology itself, shaped by training, policy, and an ongoing commitment to human judgment.

The white paper is part of a broader effort by the AI Policy Consortium for Law and Courts, a collaboration between the National Center for State Courts and the Thomson Reuters Institute.

