Pro se litigants are swapping lawyers for ChatGPT - and some are winning
AI is walking into court - through the hands of people who can't afford counsel. From eviction appeals to debt disputes, self-represented litigants are using chatbots to research, draft, and organize. The results are mixed: some real wins, and some costly sanctions.
What's happening in courtrooms
Lynn White, facing eviction and five figures in penalties, used ChatGPT and Perplexity to spot procedural errors, map strategy, and draft filings. Months later, she overturned the eviction and avoided roughly $55,000 in penalties and more than $18,000 in rent. Her take: AI worked like a virtual law clerk when nothing else was available.
Staci Dennett, sued over unpaid debt, had ChatGPT tear apart her drafts "like a Harvard Law professor." She negotiated a settlement and saved over $2,000, crediting AI for helping her hold her own with opposing counsel.
Others learned harder lessons. Florida entrepreneur Jack Owoc was sanctioned after submitting a filing with 11 fabricated citations. Earl Takefman cited a nonexistent case twice, then discovered the quotes he asked for were invented, too. A nurse managing dozens of pro se federal cases drew warnings over repeated filings with fake authorities.
Paralegals and attorneys report a surge of AI-authored briefs from pro se parties. A public database maintained by legal researcher Damien Charlotin tracks hundreds of decisions flagging AI misuse - from fake case law to misquoted and misread precedent.
The risk: hallucinations, misreads, and sanctions
Three failure modes keep appearing:
- Fabricated cases that don't exist
- False quotations from real cases
- Misrepresented holdings or context
Courts are responding with orders to disclose AI use, community service, fines, and vexatious litigant warnings. Opposing counsel report "AI slop" that drains resources and time.
For legal teams: a practical protocol you can enforce
- Establish an AI use policy: scope, approved tools, banned inputs (client identifiers, confidential facts), and mandatory verification steps.
- Require source-backed drafting: no citation appears unless verified in primary databases (Westlaw, Lexis, Fastcase, or Google Scholar) and Shepardized/KeyCited.
- Quote discipline: copy quotations from the source, not the model; maintain page/paragraph pins.
- Jurisdiction check: confirm the court, date, posture, and subsequent history; ensure the authority actually supports the proposition.
- Disclosure standard: disclose AI assistance when a standing order or local rule requires it; otherwise, disclose when ordered or when ethics obligations demand it.
- Red-team your own work: prompt the model to attack your argument, then fix gaps with real sources.
- Remove tells: strip filler, odd formatting, and emoji; align with local rules and standard legal style.
- Logging: keep a private record of prompts, outputs, and validation notes for internal QA and potential court inquiries.
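The logging step above can be sketched as a minimal record schema. This is an illustrative assumption, not a mandated format: the field names, the JSON-lines file, and the `AIUsageRecord` name are all hypothetical choices your team would adapt.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One entry in an internal AI-use log (hypothetical schema)."""
    tool: str                 # model or product used
    prompt: str               # prompt as submitted
    output_summary: str       # short summary of the model's output
    verification_notes: str   # who checked it, against which databases
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(path: str, record: AIUsageRecord) -> None:
    """Append one record as a JSON line to a private log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

A JSON-lines file keeps entries append-only and easy to search if a court later asks how a filing was prepared.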
Responding to AI-generated filings from the other side
- Run a fast triage: Ctrl+F for quoted strings; verify every citation in a primary database; confirm docket numbers and reporters.
- If you find fabrication or misuse, consider targeted relief: motions to strike, fee-shifting, or sanctions under applicable rules.
- Offer the court a clean roadmap: a short table identifying each false cite/quote with the correct authority or note of nonexistence.
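The triage steps above can be partly automated. Below is a minimal sketch that pulls candidate "volume reporter page" citations out of a filing so each one can be checked by hand; the regex is a rough heuristic assumed for illustration, not a complete citation parser, and it will both miss some formats and flag some false positives.

```python
import re

# Rough pattern for U.S. reporter citations such as "410 U.S. 113" or
# "550 F.3d 1023": a volume number, one or more reporter tokens, a page.
# Heuristic only - it surfaces strings to verify, it does not validate them.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]*(?:\s+[A-Za-z0-9.]+)*?\s+\d{1,4}\b"
)

def extract_citations(text: str) -> list[str]:
    """Return a deduplicated, sorted list of citation-like strings."""
    return sorted({m.group(0) for m in CITATION_RE.finditer(text)})
```

Every string this returns still has to be looked up in a primary database; the script only builds the checklist, it cannot tell a real case from a fabricated one.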
What pro bono and clinics are testing
Some clinics train self-represented litigants to use AI with guardrails: how to prompt for structure, how to fact-check, and how to cross-verify one model's output with another. Graduates have reported wins in small-stakes matters, especially where procedure and formatting help equalize the field.
Provider policies and reality
Major AI systems often warn users not to rely on them for legal advice, yet most will still produce direct answers. See Google's Terms of Service for an example of these limits.
Ethics and enforcement anchors
Expect more courts to require disclosure, certify diligence, or impose sanctions for fabricated content. Rule-based guardrails already exist; the technology just makes their application more frequent.
Start with Fed. R. Civ. P. 11 - Representations to the Court; Sanctions (Cornell LII).
How to use AI without getting burned
- Treat AI as a drafting assistant, not an authority. The authority is the source you cite.
- Never outsource citation generation. Ask the model for issues and structure; you supply and verify the law.
- Adopt a "two-database rule": verify every citation in at least two independent sources when feasible.
- Use retrieval-augmented prompts: paste the case/excerpt and ask for analysis limited to that text; forbid the model from inventing citations.
- Institute partner-level review for any filing touched by AI.
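The retrieval-augmented prompting practice above can be sketched as a simple template. The marker strings, wording, and function name here are illustrative assumptions; the point is that the verified excerpt travels with the prompt and the instructions forbid outside authority.

```python
def build_grounded_prompt(excerpt: str, question: str) -> str:
    """Wrap a verified excerpt in a prompt that forbids outside citations.

    Illustrative template only (hypothetical wording); adapt the
    instructions and markers to your tool and workflow.
    """
    return (
        "Analyze ONLY the text between the markers below. "
        "Do not cite or rely on any authority not contained in it. "
        "If the text does not answer the question, say so.\n"
        "=== BEGIN SOURCE ===\n"
        f"{excerpt}\n"
        "=== END SOURCE ===\n"
        f"Question: {question}"
    )
```

The model can still err inside those bounds, so partner-level review of the output remains the backstop.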
The ceiling - and the opportunity
AI can accelerate outlines, analogies, and first drafts. It cannot replace legal judgment, procedural strategy, or precise use of precedent. Lawyers who combine speed with rigorous validation will run faster without tripping the wire.
Optional training for legal teams
If your firm is building internal AI capability, structured courses can shorten the learning curve while enforcing verification habits.