Senior Lawyers Face Sanctions for Junior Attorneys' AI Errors
A federal judge in San Francisco has held a law firm manager personally liable for a subordinate's flawed AI-assisted court filing, establishing that supervising attorneys cannot escape responsibility when junior lawyers use artificial intelligence carelessly.
U.S. Magistrate Judge Peter Kang sanctioned Lenden Webb, managing partner of Webb Law Group in Southern California, for failing to properly oversee a junior attorney who used AI tools to draft a brief containing a false case citation. Kang fined Webb $1,001, required him to complete training on attorney supervision and ethical AI use, and issued a formal admonishment.
The case centers on a July 2025 filing in which junior attorney Katherine Cervantes cited a case that paired a real case name with a real case number, but the two came from different states and were unrelated to each other. The citation referenced a decision that appeared in neither case.
Cervantes told the court she had used Thomson Reuters' Westlaw AI tool and that "something messed up" while copying and pasting material into the brief. She said it was her first time using the AI-assisted research feature.
Supervisors Must Verify AI Output
Kang's ruling establishes a clear expectation: managing partners cannot delegate responsibility for accuracy. "Managers in law firms have an obligation to take reasonable steps to ensure all lawyers in the firm make ethical representations to the court," the judge wrote.
He added that supervising lawyers must at minimum "read and understand the content of all pleadings and check citations to ensure their accuracy." Webb, who was counsel of record on the filing despite not having worked on it, had acknowledged failing to vet the submission.
Webb noted in a statement that he had read a Thomson Reuters disclaimer acknowledging the "likelihood of some human and machine errors" but did not act on it. The disclaimer, which Thomson Reuters includes on its platform, underscores that lawyers, not AI vendors, bear final responsibility.
Broader Pattern of AI Mistakes in Courts
This case reflects a wider problem courts are confronting. Across the country, judges have sanctioned attorneys for submitting AI-generated research containing fabricated citations, invented case law, and false precedents.
Thomson Reuters disputed that its tools generated the erroneous citation. A company spokesperson said the firm found "no evidence that the erroneous citations were generated by CoCounsel or Westlaw AI" and reiterated that lawyers must review and verify all AI output before filing.
The question of how the false citation was produced remains unclear. Kang noted in an earlier decision that there was "some inconsistency" in Cervantes' explanations about the citation's source.
What This Means for Law Firms
The ruling creates direct financial and professional consequences for partners who fail to supervise AI use. Webb's $1,001 fine and mandatory training are modest, but the precedent is significant: courts will hold senior lawyers accountable for verifying work product generated with AI assistance.
For law firms implementing AI tools, the decision reinforces that AI-assisted legal work requires the same level of human oversight as traditional research and drafting, and perhaps more, given the known risks of AI hallucination in legal citations.
Firms deploying tools like Westlaw AI or similar platforms should establish clear verification protocols and ensure all attorneys understand that AI output is a starting point, not finished work. Partners and supervising attorneys remain liable for accuracy regardless of which attorney or tool prepared the initial draft.