AI answers vs. legal advice: trust, time, and the risk of wrong citations
Wednesday, November 5, 2025
More clients are showing up with ChatGPT printouts and using them to challenge legal advice. That pressure is real, and it eats time. As one legal leader put it, lawyers now spend more time than necessary untangling AI's confident but shaky claims.
Why AI can sound right and still be wrong
Large language models predict the next word. They're trained on huge datasets: news, forums, blogs, and yes, social media. That breadth makes them fluent, not authoritative.
As a project lead in a legal competence center noted, the key question is simple: what does a statistical model say is "true" after learning from millions of Twitter/X posts? That's why you always ask for sources, and then verify them.
The problem you're seeing inside legal work
Members use AI for quick answers on both simple and complex matters, then ask lawyers to reconcile the conflicting results. The result: duplication, rework, and risk if no one checks the citations.
AI is useful, but it's not an encyclopedia. Treat it like a fast assistant that drafts and suggests but doesn't decide.
Practical playbook for legal professionals
- Use AI for brainstorming, issue-spotting, checklists, plain-language summaries, and drafting alternatives. Don't let it be the final word on interpretation.
- Demand citations to primary sources: statutes, regulations, preparatory works, and court decisions. Verify them directly in official databases such as Lovdata.
- Ask for exact quotes with section numbers. Cross-check the quoted text word-for-word in the source; a sketch of this check, together with a simple audit log, follows the list.
- Run a counter-prompt: "List reasons this answer could be wrong." If the model surfaces conflicts or missing authorities, you just saved yourself a memo's worth of cleanup.
- Never paste client secrets into a public model. If you must work with real material, de-identify it first and strip facts that could reveal identities or strategy.
- Set a team policy: approved use cases, banned use cases, verification steps, citation format, and documentation standards.
- Log what you asked, what you got, what you verified, and what you changed. That audit trail pays for itself.
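To make the quote check and the audit trail concrete, here is a minimal Python sketch. Everything in it is illustrative: the function names, the JSONL log file, and the assumption that you have already downloaded the official text yourself from Lovdata or another official repository.

```python
# Minimal sketch, not a vetted tool: verify an AI-supplied quote
# word-for-word against official text you fetched yourself, and append
# an audit entry recording what was asked, returned, and verified.
import json
import re
from datetime import datetime, timezone

def normalize(text: str) -> str:
    """Collapse whitespace so line breaks don't cause false mismatches."""
    return re.sub(r"\s+", " ", text).strip()

def quote_is_verbatim(ai_quote: str, official_text: str) -> bool:
    """True only if the quote occurs word-for-word in the official source."""
    return normalize(ai_quote) in normalize(official_text)

def log_check(prompt: str, ai_answer: str, source_ref: str, verified: bool,
              logfile: str = "ai_audit_log.jsonl") -> None:
    """Append one JSON line per check: the audit trail that pays for itself."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "ai_answer": ai_answer,
        "source_ref": source_ref,   # e.g. a Lovdata URL plus section number
        "quote_verified": verified,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```

Note that the normalization only smooths whitespace; any wording difference between the AI's quote and the official text still fails the check, which is exactly what you want.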
A recent cautionary tale
An advisor in Tromsø Municipality relied on an AI-generated report about school closures. The sources looked legitimate but turned out to be fabricated. A five-minute source check would have prevented the mess.
So, should you trust AI more than a lawyer?
No. Trust a lawyer's judgment and use AI to accelerate parts of the work you'll verify anyway. AI can propose; law decides.
A simple workflow you can adopt today
- Frame the question with jurisdictions, dates, and the exact issue.
- Require citations and URLs to primary law. No sources, no use; a sketch of this gate follows the list.
- Verify in Lovdata or your official repositories. Keep screenshots or PDFs.
- Red-team the output: ask for opposing arguments and missing authorities.
- Document your decision and the verified sources. Move on.
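Here is a minimal sketch of the "no sources, no use" gate from step two, with Lovdata as the trusted domain. The domain set and the crude URL regex are assumptions; set them to your own official repositories and citation conventions.

```python
# Minimal sketch of a "no sources, no use" gate: set aside any AI answer
# that cites no URL on a trusted primary-law domain. The trusted-domain
# set and the URL pattern are assumptions; adapt both to your policy.
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"lovdata.no"}  # extend with your official repositories

URL_PATTERN = re.compile(r"https?://[^\s)\]]+")

def cited_urls(ai_answer: str) -> list[str]:
    """Pull every URL out of the AI's answer text."""
    return URL_PATTERN.findall(ai_answer)

def passes_source_gate(ai_answer: str) -> bool:
    """True only if at least one citation points at a trusted domain."""
    for url in cited_urls(ai_answer):
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host in TRUSTED_DOMAINS:
            return True
    return False

# Example: an answer with no primary-law citation is set aside for rework.
assert not passes_source_gate("The statute clearly allows this.")
assert passes_source_gate("See https://lovdata.no/lov/2005-06-17-62 §14-9.")
```

Passing the gate only means the answer is worth verifying; the actual check still happens in Lovdata, against the cited text itself.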
Want structured AI training for legal work?
If you're setting standards for your team (use cases, prompts, and verification steps), this curated library can help: Complete AI Training - courses by job.
Bottom line: ask for sources, check the sources, and let legal judgment lead. That's how you get the speed of AI without the mess.