AI Judges Don't Think Like Human Judges, Legal Scholar Says
A University of Chicago law professor tested whether artificial intelligence could replace human judges and found a fundamental divide: AI follows rules mechanically while humans balance law with judgment, experience, and context.
Professor Eric Posner presented the findings in April at the university's Ryerson Lecture, an annual address showcasing faculty scholarship. His research examined how large language models handle judicial decisions and why, despite growing use in courts, they are unlikely to displace human judges.
AI Is Already in the Courtroom
Federal judges are already using AI tools. Surveys show a majority of federal judges report using AI in some capacity, and some have publicly acknowledged experimenting with it.
One federal appellate judge consulted an AI model to determine whether an in-ground trampoline qualified as "landscaping" under an insurance policy. The judge disclosed the consultation and called the AI's answer helpful, though it did not ultimately affect the outcome of the case.
Beyond isolated uses, AI-driven arbitration platforms are emerging. The American Arbitration Association has developed one, promising faster and cheaper dispute resolution. These systems may gain traction in private adjudication before appearing in courts.
The legal system also faces problems from AI. Courts increasingly receive filings containing AI-generated text with fabricated legal citations, known as "hallucinations," prompting sanctions and ethical concerns.
How AI and Humans Decide Cases Differently
Posner and collaborators ran experiments comparing AI and human decision-making in legal scenarios. They tested whether judges follow rules strictly (a "formalist" approach) or factor in broader considerations like fairness or sympathy (a "realist" approach).
In a war crimes case study, human judges were influenced at the margins by positive or negative attributes of defendants. AI models behaved differently.
The AI applied the law without deviation. It disregarded sympathy entirely and followed rules with complete consistency.
In a complex "choice of law" scenario, where courts must determine which jurisdiction's law applies, human judges produced inconsistent outcomes and occasionally made factual or legal errors. AI models applied the governing rules without error.
Posner noted this apparent strength masks a weakness. Law students, like AI, tend to apply law rigidly and formally. "Would you want law students to be judges?" he asked.
The comparison reveals something deeper: human judging has never been purely mechanical. From early 20th-century legal realism to modern debates over originalism, scholars have long recognized that judicial decisions reflect not only rules, but judgment, experience, and social context.
AI is trained on the "official story" of law, the formal reasoning in judicial opinions, without access to the underlying motivations or institutional dynamics that shape real decisions.
Three Obstacles to AI Judges
Posner identified three main barriers to replacing human judges with AI.
- Opacity. AI systems cannot reliably explain their own reasoning. While they produce plausible legal arguments, it is unclear whether the reasons match their actual motivations for decisions.
- Institutional complexity. Courts operate within hierarchies, interact across jurisdictions, and respond to political and social pressures. Human judges are embedded in this structure in ways that would be difficult to replicate artificially.
- The gap between story and reality. AI systems trained on formal legal texts may faithfully reproduce the rhetoric of judging without capturing how decisions are actually made.
Where AI Will Actually Show Up
Posner sees reasons for cautious optimism about AI's role in law. AI is effective at identifying patterns across recurring fact scenarios, a core feature of legal reasoning. It also produces polished, coherent opinions that are difficult to distinguish from human-written ones.
Yet he remains skeptical that AI will displace judges. A quieter transformation is more likely: judges will increasingly rely on AI tools behind the scenes, even if they do not always acknowledge it publicly.
Posner finds the prospect of continued human involvement essential, especially in close cases that could go either way.
"I want a human to flip the coin so we can argue about it," he said. "I don't want an LLM to do that."
For legal professionals, understanding these dynamics matters. As AI tools proliferate in legal workflows, from document review to research, the distinction between what AI can do mechanically and what requires human judgment becomes increasingly relevant to how lawyers and judges work. Resources on AI for legal professionals can help practitioners navigate this shift.