Israeli Court's Ruling Undermined by Inaccurate AI-Generated Legal Citations
A small claims court ruling in Tel Aviv contains fabricated and distorted quotations from Israeli law, raising questions about judicial use of artificial intelligence in drafting decisions.
Senior Registrar Michael Shempel issued the ruling in January in a case involving the Spam Law. The decision includes several citations of legislation that are either inaccurate or do not exist as written, with evidence pointing to careless or incorrect AI use.
What the Ruling Got Wrong
The registrar cited Section 30A(e) of the Communications Law as stating that "a recipient may at any time notify the advertiser that they refuse to receive advertising material." The actual law says advertisers who send material "must include certain details in a clear and prominent manner so as not to mislead," followed by specific requirements. The ruling omits these elements and uses the distorted version to conclude the defendant violated the law.
In another instance, the ruling defines advertising as "a message distributed for commercial purposes, intended, directly or indirectly, to encourage the purchase of a product or service." The statutory definition does not include the phrase "directly or indirectly."
The ruling also misquotes Section 30A(b), attributing language to the law that grants discretionary authority to the Minister. The actual statute contains detailed consent provisions and exemptions but does not include such ministerial discretion.
Pattern of AI Errors in Israeli Courts
This is not the first time Israeli courts have confronted AI-generated inaccuracies. In February, Justice Gila Canfy-Steinitz rejected a High Court petition based on fabricated judgments and citations created with AI assistance. "The use of artificial intelligence tools by lawyers does not absolve them of their professional responsibility and judgment," she wrote.
Days later, Supreme Court Justice Noam Sohlberg dismissed a similar petition. "There is concern that we are dealing with a phenomenon, or a troubling wave," he said. In March, the Supreme Court ordered Ramat Gan Municipality to pay 30,000 New Israeli Shekels after finding it relied on fabricated AI-generated quotes in court filings.
New Guidelines Fall Short
In March, the Courts Administration published a code of ethics for judicial use of AI. The code requires judicial officers to exercise independent judgment, disclose AI use, and "verify every factual determination, quotation, reference to legislation and case law, legal summary, and any other information against authoritative sources."
The code, however, was published only after Shempel's ruling had been issued.
Court's Response
A spokesperson for the Courts Administration acknowledged the problem. "There were indeed inaccuracies in the citation of statutory provisions, and this is regrettable. However, these inaccuracies do not affect the outcome of the ruling," the spokesperson said. The decision may be appealed.
For legal professionals using AI tools, these cases underscore a critical distinction: AI can accelerate research and drafting, but it cannot replace verification against primary sources. AI-assisted legal work requires human oversight at every step, particularly when citations and statutory language are at stake.