A federal judge has sanctioned the manager of a California law firm over a junior attorney’s artificial intelligence-assisted court brief that contained a false case citation, saying the responsibility for such errors extends to supervising lawyers.
U.S. Magistrate Judge Peter Kang in San Francisco said in an order on Tuesday that the attorney, Lenden Webb, should have exercised greater oversight of a lawyer in his small law office who said she used AI to help craft the brief.
“Managers in law firms have an obligation to take reasonable steps to ensure all lawyers in the firm make ethical representations to the court,” the judge said.
Webb, managing partner of Southern California-based Webb Law Group, was admonished in the order, fined $1,001 and required to complete training on supervising attorneys and the ethical use of AI.
In a statement on Wednesday, Webb said “our firm has doubled down to continue to improve, and I believe we are at the forefront of creating value for our clients, which isn’t without the hiccups of issues such as a hallucinated citation.”
Courts across the country are contending with lawyers’ increasing use of AI programs for legal research and document drafting, which has led to sanctions and rebukes against attorneys who rely on the technology without fully vetting the results.
Kang’s ruling addressed a more novel issue: the culpability of supervising attorneys when lawyers working under them use AI programs carelessly.
“At minimum, a supervising lawyer should read and understand the content of all pleadings and check citations to ensure their accuracy,” the judge wrote.
The erroneous citation was contained in a July 21, 2025, filing related to gathering evidence in the underlying employment lawsuit. According to the judge, the citation included a real case name and a real case number, but the two were from different states and did not match, and the citation referred to a decision that did not appear in either case.
The junior lawyer who prepared the filing, Katherine Cervantes, told the judge at a hearing in August 2025 that she had used Thomson Reuters’ artificial intelligence tool Westlaw AI. Cervantes at the hearing told Kang that “something messed up” when she was copying and pasting material from Westlaw into the brief. She said it was her first time using AI-assisted research in Westlaw.
The judge in a September 2025 decision sanctioning Cervantes said there appeared to be “some inconsistency” in the lawyer’s explanations.
“While the record is less than clear as to the source of the fake citation, what is clear is that there was a breakdown and failure to check or even try to read the cited law,” Kang wrote.
The court could not establish how the erroneous citation was produced.
A spokesperson for Thomson Reuters, the parent company of Reuters and Westlaw, in a statement disputed that its AI tools were responsible for the faulty material.
“Following an internal review, we found no evidence that the erroneous citations were generated by CoCounsel or Westlaw AI. Thomson Reuters designed CoCounsel to support lawyers, not replace their judgment. Our platform includes clear guidance that AI-generated outputs must be reviewed and verified by the attorney before use, because the responsibility for work product has always belonged to the lawyer, and always will.”
Cervantes did not immediately respond to requests for comment.
At a hearing in the case in November, Webb acknowledged failing to vet the submission despite having read a Thomson Reuters disclaimer saying the “provision of content and software entails the likelihood of some human and machine errors.”
Webb told the court he didn’t collaborate with the junior lawyer on the brief or help draft it. Still, he was counsel of record, and his name appeared on the filing, the judge said.