Judges use AI for administrative tasks but insist on keeping humans in control of decisions, study finds

U.S. judges are using AI to summarize documents and organize case materials, but none are letting it make legal decisions. A WVU study of 13 judges found courts treat AI as an assistant, not a decision-maker.

Published on: May 02, 2026


Judges across the United States are using generative AI to handle administrative work, such as summarizing documents, organizing case materials and drafting speeches, but they are not delegating judicial decision-making to the technology, according to research from West Virginia University.

A white paper based on interviews with 13 state and federal judges found that courts are treating AI as a junior assistant. The tool handles preparatory work that frees judges to focus on legal reasoning and final judgments.

"Every single judge we spoke with was clear-eyed about this," said Amy Cyphert, associate professor in the WVU College of Law. "They see these tools as helpful, but they also believe very strongly that the responsibility for decision making must remain entirely human."

Where Judges Are Using AI

Judges reported using generative AI to:

  • Summarize lengthy documents and briefs
  • Organize case materials and evidence
  • Draft speeches and prepare questions for oral arguments
  • Improve accessibility for people without legal representation

Cyphert said some judges see potential for making court processes clearer and easier to navigate. "There are real opportunities here to make the system more accessible," she said. "Things like clearer explanations, better communication and easier navigation of court procedures could make a meaningful difference."

The Accuracy Problem

All 13 judges raised concerns about AI "hallucinations," instances where the system generates false or misleading information with confidence. Judges said they must verify outputs carefully before using them.

The stakes are high. A single error in a court opinion or filing could damage public confidence in the judiciary. "They are very aware that even a single error like that could affect confidence in the courts," Cyphert said. "So, they are approaching these tools with a high level of caution."

Privacy and Data Security

Judges reported avoiding AI tools for confidential or sealed materials. Many are also monitoring how their staff use generative AI to prevent sensitive information from being shared in prompts.

"There's a lot of thoughtfulness around what information can safely be used with these tools," Cyphert said.

What Judges Need Next

The research points to gaps in training and policy. Judges expressed strong interest in practical guidance on using AI effectively, spotting errors, and sharing best practices across courts.

The field lacks clear policies on disclosure, acceptable use, and ethical guidelines. Establishing those standards will require coordination between courts, bar associations, and technology vendors.

"These tools are increasingly embedded in everyday software," Cyphert said. "What matters most is that judges and lawyers continue to do ethical work and strive for fairness in every case."

The research was conducted through the AI Policy Consortium for Law and Courts, a collaboration between the National Center for State Courts and the Thomson Reuters Institute.


