Senior Lawyer Apologises for AI-Generated Errors in Victorian Murder Case
A senior defence lawyer in Victoria has formally apologised after submitting legal documents containing fabricated quotes and fictitious case references generated by artificial intelligence (AI). Rishi Nathwani, a King’s Counsel, accepted full responsibility for the inaccuracies in submissions related to a murder trial involving a teenage defendant.
During a hearing, Mr Nathwani expressed regret to Justice James Elliott, stating, “We are deeply sorry and embarrassed for what occurred.” The AI-generated misinformation delayed proceedings by 24 hours, upending the judge’s plan to resolve the case promptly.
Details of the AI Errors
The flawed submissions included fabricated case citations attributed to the Supreme Court and invented quotes from a speech to the state legislature. Justice Elliott’s associates identified the discrepancies when they were unable to locate the referenced cases and requested supporting documents from the defence team.
The defence lawyers admitted that some citations “do not exist” and confirmed the presence of “fictitious quotes” in their filings. They explained that while some initial citations were verified, they mistakenly assumed the remaining references were accurate without further checks.
Impact on the Case and Court’s Response
Justice Elliott later found the teenage defendant not guilty of murder because of mental impairment. However, he emphasised the seriousness of submitting inaccurate material, saying, “The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice.”
The court also noted that the prosecutor, Daniel Porceddu, had not verified the accuracy of the submissions before responding to them. Justice Elliott pointed to guidelines the Supreme Court released last year, which require that AI-generated content be independently and thoroughly verified before it is used in legal documents.
Legal and Ethical Considerations Around AI Use
This incident highlights the risks associated with relying on AI tools without proper validation in legal practice. The specific AI system used by the lawyers was not disclosed in court documents.
Similar issues have arisen internationally. In 2023, a U.S. federal judge fined two lawyers and their firm $7,600 for submitting fictitious legal research generated by ChatGPT in an aviation injury case. In the UK, High Court Justice Victoria Sharp warned that presenting false material as genuine could amount to contempt of court or, in the most serious cases, perverting the course of justice, a charge that carries penalties up to life imprisonment.
Key Takeaways for Legal Professionals
- Verify AI-generated content thoroughly: Never assume AI-provided information is accurate without independent checks.
- Follow court guidelines on AI use: Courts increasingly expect lawyers to rigorously validate any AI-assisted materials before submission.
- Understand the risks of inaccurate submissions: False information can delay proceedings, damage professional reputation, and lead to serious legal consequences.
As AI tools become more common in legal research and document drafting, lawyers must prioritise accuracy and ethical responsibility to maintain trust in the judicial system.
Legal professionals looking to upskill on AI use and best practices can explore targeted AI courses for legal work that emphasise the safe and effective integration of AI technology into legal practice.