Why Legal AI Needs Better Architecture, Not Generative Guesswork
When someone's liberty hangs on a brief riddled with fabricated citations, the problem isn't a bad prompt. It's the wrong architecture. In early 2025, Kyle Kjoller, a welder in Nevada County, California, was held without bail after prosecutors filed an 11-page brief that his lawyers say showed generative-AI fingerprints: misread law, invented quotes, and misstated constitutional text. Legal and tech scholars urged the state's high court to scrutinize unchecked generative AI in prosecutions because it risks due-process harms and wrongful convictions.
The Hidden Danger: Generative "Helpfulness"
Generative AI is built to be persuasive. Ask it to support a position and it will often produce confident-sounding authority that fits the argument - even when the source doesn't say that, or doesn't exist. This isn't hypothetical. Courts have sanctioned AI-fabricated citations, and Stanford's AI Index has flagged the pattern as a persistent real-world risk.
The Wrapper Wave Problem
Legal AI adoption is surging. Firms are rolling out tools wrapped around large language models - clean UX, helpful prompts, and guardrails. Harvey, for example, is used by more than half of the top U.S. firms and is explicitly an LLM-powered drafting and review platform.
The catch: most of these tools are still generative-first. Even with retrieval or "knowledge grounding," the model is usually allowed to free-generate around what it pulled. That freedom is where hallucinations sneak back in. It's not a vendor issue. It's an architecture issue.
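To make that concrete, here is a minimal sketch of the generative-first pattern described above. It is not any vendor's code; retrieve and call_llm are hypothetical placeholders standing in for a document index and a model API. The point is structural: the sources go into the prompt, but nothing downstream checks that the answer stays inside them.

```python
# A minimal sketch of the generative-first pattern -- not any vendor's code.
# `retrieve` and `call_llm` are hypothetical placeholders for a document
# index and a model API.

def retrieve(question: str) -> list[str]:
    """Placeholder: pull candidate passages from the case file."""
    return [
        "Deposition of J. Smith, p. 42: 'I left the office around six.'",
        "Exhibit 4: building access log for March 3.",
    ]

def call_llm(prompt: str) -> str:
    """Placeholder for any large language model completion call."""
    ...

def generative_first_answer(question: str) -> str:
    passages = retrieve(question)
    prompt = (
        "Use the following sources to answer.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    # The model sees the sources, but nothing after this call verifies that
    # the answer stays inside them. It remains free to cite authority or
    # "facts" that appear in none of the retrieved passages.
    return call_llm(prompt)
```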
The question isn't whether to use these tools. It's whether your team can tell the difference between generative help and closed-loop trust.
Generative vs. Closed-Loop AI
Generative models predict what text should come next. They can produce arguments that read well, complete with "plausible" citations, while drifting away from the record. Courts are treating that drift as a competence and candor problem.
Closed-loop AI flips the rules. The system is constrained to verified sources you provide - transcripts, exhibits, discovery, case records - and every output is grounded in that material. It can summarize, extract, classify, and map connections. It cannot invent authority, quotes, or facts that aren't supported by a source. It also isn't reaching out to the open internet during that work.
Does this eliminate all risk? No. A summary can still miss nuance or tone. But by design, the model stays inside the record - which materially cuts hallucination risk where it matters.
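Here is a minimal sketch of what "stays inside the record" can mean in code. This is an illustration of the closed-loop idea, not Rev's implementation: the word-overlap test is a crude stand-in for whatever grounding machinery a production system would actually use, and the names and example text are hypothetical.

```python
# A minimal sketch of the closed-loop constraint -- not Rev's implementation.
# The support test is a crude word-overlap stand-in for real grounding
# machinery; the point is the gate, not the scoring.

import re

def _words(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def is_supported(claim: str, sources: list[str], threshold: float = 0.8) -> bool:
    """True if enough of the claim's words appear in at least one source."""
    claim_words = _words(claim)
    if not claim_words:
        return True
    return any(
        len(claim_words & _words(src)) / len(claim_words) >= threshold
        for src in sources
    )

def closed_loop_filter(draft_sentences: list[str], sources: list[str]) -> dict:
    """Split a draft into grounded sentences and flagged (unsupported) ones."""
    grounded, flagged = [], []
    for sentence in draft_sentences:
        (grounded if is_supported(sentence, sources) else flagged).append(sentence)
    return {"grounded": grounded, "needs_review": flagged}

# Example: the second sentence cites an exhibit that is not in the record,
# so it is routed to human review instead of silently passing through.
record = ["The witness stated she left the office at 6 p.m. on March 3."]
draft = [
    "The witness stated she left the office at 6 p.m. on March 3.",
    "Exhibit 12 confirms the alarm was disabled that evening.",
]
print(closed_loop_filter(draft, record))
```

Run as written, the first sentence passes because the record fully supports it, while the invented "Exhibit 12" claim is flagged for attorney review rather than shipped.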
What This Means for Everyday Legal Work
The Kjoller matter shows why architecture is the right conversation. Blaming time pressure or inadequate training ignores reality: legal work is high volume by default, and "just have a human catch it" doesn't scale. The answer isn't banning AI. It's choosing AI that's constrained by design.
In deposition or evidence review, a generative system can invent a clean story of what a witness "probably meant." A closed-loop system keeps you anchored to the transcript and exhibits, and flags thin support instead of glossing over it. That lets you move faster without drifting beyond the record.
That shift builds a different foundation of trust (sketched in code after this list):
- Verifiability: every statement ties back to a source
- Auditability: a clear trail from output to record
- Defensibility: you can show receipts, not just say "we checked"
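One illustrative way to make those three properties concrete, assuming nothing about any particular product's schema: attach provenance to every AI-produced statement so the trail from output back to the record can be printed on demand.

```python
# An illustrative data shape, not a prescribed schema. Each AI-produced
# statement carries the exact source it relies on, so the trail from output
# back to the record can be shown, not just asserted.

from dataclasses import dataclass

@dataclass
class Provenance:
    source_file: str   # e.g. the transcript or exhibit the statement relies on
    location: str      # page/line or timestamp within that source
    quoted_span: str   # the verbatim text that supports the statement

@dataclass
class GroundedStatement:
    text: str
    support: list[Provenance]  # empty list = unsupported, surface for review

def audit_trail(statements: list[GroundedStatement]) -> str:
    """Render a human-readable trail from each statement back to the record."""
    lines = []
    for s in statements:
        lines.append(f"STATEMENT: {s.text}")
        if not s.support:
            lines.append("  !! no supporting source -- needs attorney review")
        for p in s.support:
            lines.append(f"  source: {p.source_file} @ {p.location}")
            lines.append(f'  quote:  "{p.quoted_span}"')
    return "\n".join(lines)
```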
Rev's Approach to Closed-Loop AI
Closed-loop isn't a buzzword. It's what legal practice has always required: stick to the record. At Rev, the technology is grounded in customer-owned source files. When AI assists with summarizing testimony, extracting issues, or surfacing key moments, outputs stay tethered to the record - with provenance your team can inspect and defend.
We've seen this in real trials. In a recent criminal defense case, Greening Law Group used Rev to spot contradictions across body-cam and interview footage, build cross-examination faster, and work with evidence in real time. That closed-loop workflow helped secure dramatically reduced sentences for their client.
The goal isn't to make legal work "more generative." It's to help teams move faster inside the boundaries of what's actually been said and recorded - and do it in a deployment model that builds confidence, not risk.
The Path Forward
Every hallucinated citation chips away at trust - in AI-assisted work and in the system itself. Done right, AI can help overburdened defenders, help prosecutors manage evidence responsibly, and expand access to legal services. Done wrong, the field either retreats from AI or normalizes fake authority. Both are bad outcomes.
The threat isn't AI. It's AI optimized for plausibility over verifiability. Law runs on the integrity of the record and the integrity of citations. Generative models break that by sounding right without being right. Closed-loop systems rebuild it by tethering outputs to verified sources. And yes, even then, the duty remains: validate, understand limits, and apply professional judgment. Closed-loop makes that work safer and more defensible. It doesn't make it optional.
Kyle Kjoller's liberty shouldn't hinge on whether a model hallucinated a prosecutor's research. Neither should anyone else's.
If your team is standing up practical training on closed-loop workflows and AI oversight, you may find these resources useful: AI courses by job.