The AI law era is here: LexisNexis CEO Sean Fitzpatrick on "courtroom-grade" AI, accuracy, and the future of legal work
LexisNexis has shifted from being the legal world's library to an AI-powered drafting and research platform. CEO Sean Fitzpatrick says the company's new system, Protégé, is built for one outcome above all: accuracy grounded in real law.
That's a bold promise in a year when AI-fueled hallucinations have wasted court time, triggered sanctions, and eroded trust. Fitzpatrick's pitch is simple: general-purpose AI is probabilistic; courts demand determinism. Protégé is engineered to close that gap.
From search to drafting: what Protégé actually does
LexisNexis isn't building a foundation model; it's building applications on top of existing ones. Protégé uses a model-agnostic, agent-based approach to plan tasks, pick the best model for each step, and ground everything in LexisNexis' curated corpus.
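To make that concrete, here's a minimal Python sketch of what model-agnostic, agent-based routing can look like. Everything in it is illustrative: the step types, model names, and the retrieve()/call_model() stubs are invented for this example, not LexisNexis' actual architecture or API.

```python
# Illustrative only: a minimal model-agnostic router, not LexisNexis'
# actual system. Model names and the stubs below are hypothetical.
from dataclasses import dataclass

@dataclass
class Step:
    kind: str    # e.g., "research", "draft", "verify"
    prompt: str

# Hypothetical routing table: pick a model per step type.
MODEL_FOR_STEP = {
    "research": "deep-research-model",
    "draft": "drafting-model",
    "verify": "citation-check-model",
}

def retrieve(query: str) -> list[str]:
    """Stub for retrieval from a curated legal corpus (grounding)."""
    return [f"[source snippet for: {query}]"]

def call_model(model: str, prompt: str, context: list[str]) -> str:
    """Stub for an LLM call; a real system would hit a model API here."""
    return f"{model} output for {prompt!r}, grounded in {len(context)} sources"

def run_plan(steps: list[Step]) -> list[str]:
    outputs = []
    for step in steps:
        context = retrieve(step.prompt)    # ground every step in sources
        model = MODEL_FOR_STEP[step.kind]  # route to the best-fit model
        outputs.append(call_model(model, step.prompt, context))
    return outputs

if __name__ == "__main__":
    plan = [
        Step("research", "limitations period for breach of contract in NY"),
        Step("draft", "motion section applying the limitations period"),
        Step("verify", "check every cited case exists and is good law"),
    ]
    for line in run_plan(plan):
        print(line)
```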
- Ground truth: 160 billion documents and records, curated and updated.
- Built-in verification: a citator "agent" checks whether a cited case exists and whether it's still good law (a toy version is sketched after this list).
- Transparency: links back to sources, summaries, headnotes, and the logic behind answers.
- Privacy: matter-specific "vaults" and enterprise-grade security to preserve privilege.
- Human review: an internal "army of attorneys" QA outputs and tunes prompts, workflows, and guardrails.
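Here's a toy version of that citator check, just to show the fail-closed logic: unknown citations get rejected, not trusted. The CitatorRecord shape and the two sample entries are invented for illustration; a real citator is a continuously maintained service, not a dict.

```python
# Illustrative sketch of a citator-style check, assuming a hypothetical
# lookup table; this is not Shepard's or the Protégé API.
from dataclasses import dataclass

@dataclass
class CitatorRecord:
    exists: bool
    good_law: bool
    note: str = ""

# Toy citator "database"; the second entry is deliberately fake.
CITATOR = {
    "Mata v. Avianca, Inc. (S.D.N.Y. 2023)":
        CitatorRecord(exists=True, good_law=True),
    "Smith v. Imaginary Corp., 999 F.9th 1 (1st Cir. 2099)":
        CitatorRecord(exists=False, good_law=False, note="no such case"),
}

def verify_citation(cite: str) -> tuple[bool, str]:
    """Return (ok, reason). Fail closed: unknown citations are rejected."""
    record = CITATOR.get(cite)
    if record is None or not record.exists:
        return False, "citation not found in corpus"
    if not record.good_law:
        return False, "negative treatment: no longer good law"
    return True, "verified"

for cite in CITATOR:
    ok, reason = verify_citation(cite)
    print(("PASS" if ok else "FAIL"), cite, "->", reason)
```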
The end goal: move beyond search into structured reasoning and drafting, while keeping a lawyer in the loop.
Accuracy is the product
Recent sanctions show the risk of consumer AI in court filings. In Mata v. Avianca, lawyers were penalized for citing fabricated cases. Fitzpatrick expects someone will eventually lose a license for sloppy AI use.
Protégé's countermeasures are practical: verify every case, expose the chain of reasoning, link to sources, and enforce format checks. It's not glamorous, but this is what "courtroom-grade" should mean.
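As a sketch of that "verify or reject" posture, here's a simplified output gate in Python. The citation regex is deliberately crude (nowhere near Bluebook-complete), and the verified-sources map is a stand-in for a real verification service; both are assumptions for this example.

```python
# Hedged sketch of a "link or reject" gate: not the product's actual checks.
import re

# Crude pattern for reporter-style cites (illustrative, not exhaustive).
CITE_RE = re.compile(r"\b\d+\s+[A-Z][\w.]*(?:\s[\w.]+)?\s+\d+\b")

# Hypothetical map from citation text to a verified source link.
VERIFIED_SOURCES = {
    "550 U.S. 544": "https://example.com/twombly",  # placeholder URL
}

def gate_draft(draft: str) -> tuple[bool, list[str]]:
    """Pass only if every detected citation resolves to a verified link."""
    problems = []
    for cite in CITE_RE.findall(draft):
        if cite not in VERIFIED_SOURCES:
            problems.append(f"unverified citation: {cite}")
    return (not problems), problems

ok, issues = gate_draft(
    "Plausibility requires more, 550 U.S. 544, unlike 123 F.4th 456."
)
print("READY" if ok else "REJECTED", issues)
```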
The apprenticeship problem you can't ignore
AI nukes the traditional pipeline of junior work. Associates used to learn by doing research, drafting first passes, and getting edited. Now a partner can spin up 300 deposition questions in seconds and skim the list.
If you don't redesign training, you'll get faster output and weaker lawyers. Here's what firms are testing:
- AI-first rotations with human grading: juniors generate, seniors score with rubrics, and juniors revise until they pass a standard.
- "Source-or-it-doesn't-exist" policy: no text survives without a cited, checked authority.
- Red-team reviews: assign an associate to attack the AI's draft and find missing issues.
- Manual sampling: partners randomly spot-check sections against the record and the law.
- Skills ledger: track who has proven competence (fact development, motion strategy, cross outlines) with and without AI.
How far to automate? Draft, yes. File, no.
Fitzpatrick is clear: keep humans in the loop. Protégé can draft continuances or motions from scratch, or build from your prior work in your document management system (DMS). It can alert you to deadlines and propose drafts.
But auto-filing without human review? Not today. Not for substantive matters. The stakes are too high.
Judges, clerks, and originalism: where to draw the line
Some judges are experimenting with corpus linguistics and AI to backfill "original meaning." The risk is obvious: outsource the reasoning, and you'll outsource the judgment. Fitzpatrick's view: AI can assist with structure and clarity, but people must own the decisions and the words.
If you're on the bench or briefing a court, consider these guardrails:
- Disclosure: state whether AI assisted and how the output was validated.
- Source transparency: require links to the underlying authorities, not summaries.
- Auditability: maintain prompts, versions, and verification logs (a minimal log format is sketched after this list).
- No bot-to-bot adjudication: humans must review submissions and draft opinions.
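Here's what a minimal audit trail could look like. This is an assumed format, not any court's requirement: append-only JSON lines recording the timestamp, model version, prompt, a hash of the output, and the verification result.

```python
# A minimal audit-trail sketch (assumed format, not a court requirement):
# append-only JSON lines, treated like work product.
import datetime
import hashlib
import json

def log_entry(path: str, prompt: str, model_version: str,
              output: str, verified: bool) -> None:
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        # Hash the output so the log proves what was produced without
        # duplicating privileged text into a second store.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "verified": verified,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_entry("ai_audit.jsonl", "Draft a continuance motion for ...",
          "drafting-model-v3", "IT IS HEREBY ...", verified=True)
```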
For ethics guidance, see the ABA's Formal Opinion on lawyer use of generative AI (ABA Formal Opinion 512).
Inside the stack: why this might work
- Agentic orchestration: a planning agent routes subtasks to the best-fit models (e.g., deep research vs. drafting).
- Cost curve: token prices keep dropping, enabling depth at scale.
- Localization: same core tech, with localized content and markup by jurisdiction; no cross-border mixing of law (see the filter sketch after this list).
- Attorney feedback loop: practicing lawyers review outputs by task and practice area to reduce blind spots.
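A jurisdiction filter can be as blunt as a hard retrieval constraint. This sketch assumes documents tagged with a jurisdiction code; the tags and the two-document corpus are invented for illustration.

```python
# Sketch of jurisdiction scoping, assuming documents tagged by jurisdiction;
# the tags and corpus here are invented, not a real content model.
CORPUS = [
    {"title": "NY limitations statute", "jurisdiction": "US-NY", "text": "..."},
    {"title": "UK Limitation Act 1980", "jurisdiction": "UK", "text": "..."},
]

def retrieve(query: str, jurisdiction: str) -> list[dict]:
    """Hard filter: only documents from the requested jurisdiction, so a
    New York answer can never silently cite UK authority."""
    return [d for d in CORPUS
            if d["jurisdiction"] == jurisdiction
            and query.lower() in d["title"].lower()]

print(retrieve("limitation", "US-NY"))  # NY statute only; the UK act is excluded
```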
Adopt AI without getting burned: a practical checklist
- Vendor due diligence
  - Ask for source transparency, model provenance, update cadence, and jurisdiction coverage.
  - Demand a live demo showing citation verification and error handling.
- Ground truth and verification
  - Enforce "link or reject." No authority, no output.
  - Require Shepardizing/KeyCiting equivalents and date stamps.
- Security and privilege
  - Use firm-controlled vaults. Confirm no training on your data.
  - Log access, prompts, and outputs like work product.
- Policy and training
  - Codify acceptable use, disclosure rules, and mandatory review steps.
  - Train partners and staff on verification workflows and common failure modes.
- Billing and disclosure
  - Define how AI-accelerated work is billed and described to clients.
  - Align with court directives on AI disclosures in your jurisdictions.
- Quality metrics
  - Track accuracy, time saved, revision counts, and client outcomes.
  - Publish internal scorecards per practice group (a minimal tracking sketch follows this checklist).
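A scorecard doesn't need heavy tooling to start. This sketch aggregates invented sample review records into per-group accuracy, time saved, and revision counts; the metric names mirror the checklist above.

```python
# Illustrative scorecard aggregation; the records below are invented
# sample data, not real firm metrics.
from collections import defaultdict
from statistics import mean

reviews = [  # one record per AI-assisted task, graded on human review
    {"group": "Litigation", "accurate": True,  "minutes_saved": 45, "revisions": 2},
    {"group": "Litigation", "accurate": False, "minutes_saved": 10, "revisions": 5},
    {"group": "Corporate",  "accurate": True,  "minutes_saved": 30, "revisions": 1},
]

by_group = defaultdict(list)
for r in reviews:
    by_group[r["group"]].append(r)

for group, rows in by_group.items():
    print(f"{group}: accuracy={mean(r['accurate'] for r in rows):.0%}, "
          f"avg_minutes_saved={mean(r['minutes_saved'] for r in rows):.0f}, "
          f"avg_revisions={mean(r['revisions'] for r in rows):.1f}")
```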
30/60/90-day playbook
- Days 1-30: Pick two high-volume tasks (e.g., deposition outlines, discovery responses). Pilot with 3 partners, 3 associates, and 1 knowledge-management (KM) lead. Baseline quality and time.
- Days 31-60: Roll in verification steps, red-team reviews, and disclosure templates. Tune prompts and vault content. Start a training series.
- Days 61-90: Expand to two more tasks per practice. Publish metrics. Update the firm AI policy. Decide go/no-go for matter intake, alerts, and templated motion drafting.
Where this lands
AI is already in the courtroom. The question isn't whether you'll use it; it's how you'll control it. Fitzpatrick's stance is pragmatic: ground everything in real law, show your work, and keep a human hand on the wheel.
Do that, and you'll get faster drafts, tighter arguments, and fewer ethics risks. Skip it, and you'll spend your time explaining hallucinations to a judge. Choose wisely.