Spanish Court Penalizes Lawyer for Filing AI-Generated Fake Case Citations
A Spanish attorney filed court documents containing fabricated judicial decisions created by artificial intelligence, prompting a disciplinary case that clarifies professional liability when using AI in legal work.
In October 2025, attorney I.G. submitted a complaint to the Sala de lo Social del Tribunal Superior de Justicia de Navarra on behalf of a company. The filing cited excerpts from supposed rulings by the Tribunal Constitucional, Tribunal Supremo, and other courts. Judges discovered the citations did not exist in official records. The case numbers and dates did not match any real decisions.
The court initiated disciplinary proceedings in February, charging bad faith. Spanish law allows fines between 180 and 6,000 euros for such violations, plus referral to professional associations for additional sanctions.
The lawyer's response
The attorney admitted the error immediately, calling it unintentional. She said she had not reviewed the text thoroughly before submission and had relied on AI assistance without adequate verification. She retracted all disputed citations and asked for a minimal warning rather than formal sanctions.
In a follow-up statement, she again expressed regret and requested the case be closed with only a verbal warning.
The court's decision
Judge María José Ramo's ruling did not impose a fine, citing the lawyer's prompt admission and cooperation. The decision emphasized a key principle: lawyers remain responsible for document accuracy regardless of automation or AI involvement.
The court noted that AI systems commonly produce false information, known as "hallucinations," particularly in legal contexts. Careless use can result in accusations of bad faith and abuse of process.
The ruling serves as notice to all legal professionals using AI: thorough verification of all materials is mandatory under professional ethics standards.
Broader implications
Spain's judicial system faces rising cases of AI-related errors in legal documents. Courts and professional associations are reviewing internal regulations to establish clearer standards for technology use in practice.
The case reflects a broader European debate about accountability when using AI in legal work. Even absent malicious intent, the person who submits the document, not the software, bears responsibility for its contents.