Quebec man fined C$5,000 for AI-fabricated citations in aircraft dispute

A Quebec court fined Jean Laprade C$5,000 for filing fabricated AI-generated citations, calling the conduct "highly reprehensible." The saga spans a seized aircraft, Interpol alerts, and a C$2.7m arbitration award.

Categorized in: AI News, Legal
Published on: Oct 16, 2025

Quebec court fines self-represented defendant for AI-fabricated citations

The Quebec Superior Court has fined Jean Laprade C$5,000 (about US$3,562) for submitting AI-generated "hallucinations" in court filings. Justice Luc Morin called the conduct "highly reprehensible" and warned that it risks undermining confidence in the legal system.

The decision, released 1 October, caps a legal saga the judge said "contains several elements worthy of a successful movie script," including a "hijacked plane," Interpol red alerts, and the "inappropriate use of artificial intelligence."

What happened

While in Guinea, Laprade brokered a deal for three helicopters and an airplane. A contract error awarded him an aircraft far more valuable than agreed, which he was accused of diverting to Quebec.

Two aviation companies sought to recover the aircraft. In 2021, an international arbitration chamber in Paris ordered Laprade to pay C$2.7m. The aircraft has remained under seizure at Sherbrooke airport since 2019.

The AI misuse at issue

In his defense, Laprade filed materials containing "eight instances of non-existent citations, decisions not rendered, references without purpose and inconsistent conclusions." The court had already cautioned the legal community in 2023 that any AI-generated content must be subject to "rigorous human control."

Justice Morin found the attempt to "mislead the opposing party and the Tribunal by producing fictitious extracts" to be a "serious breach." He emphasized that "the filing of a procedure remains a solemn act that can never be taken lightly."

Sanction and the court's stance on AI

Laprade, 74, apologized and said AI was key to mounting his defense without counsel. The court acknowledged the challenge but held him responsible: "He must bear alone all the opprobrium resulting from quotations 'hallucinated' by artificial intelligence."

The judge underscored both the promise and the risk: artificial intelligence "will seriously test the vigilance of the courts for years to come."

Practical takeaways for legal teams

  • Adopt a verification protocol: Treat AI output as a draft only. Independently validate every case citation, quote, and procedural reference in official reporters or court websites.
  • Require disclosure: If your court or bar recommends or mandates disclosure of AI assistance, include a short declaration and describe human review steps.
  • Ban fabricated sourcing: No citation enters a filing without a retrieved primary source (PDF or official URL) checked by a human. Keep copies in the file.
  • Use retrieval-first tools: Prefer systems that cite to primary law and provide links to official sources. Disable "creative" modes for legal research.
  • Keep an audit trail: Save prompts, model versions, and verification notes. If challenged, you can show your diligence.
  • Train your staff: Teach attorneys and support teams how AI can hallucinate, how to cross-check, and where AI is inappropriate (e.g., unreviewed legal analysis).
  • Set boundaries in engagement letters: Inform clients about how your firm uses AI, quality controls, and human accountability.
  • Apply the "solemn filing" test: If you wouldn't swear to it, don't file it. Every assertion and citation must stand on its own without AI crutches.

A quick checklist before filing AI-assisted work

  • Run a manual cite-check against official reporters/databases.
  • Open and read each cited decision; confirm holding, posture, and jurisdiction.
  • Verify quotes verbatim; confirm page/paragraph pin cites.
  • Confirm procedural rules and deadlines from primary sources.
  • Attach or bookmark certified sources in your internal file.
  • Document your human review and approval.

Context and resources

Interpol "red alerts" referenced in this case are explained here: Interpol Red Notices.


