
NY lawyers fined for using fake ChatGPT cases in legal brief

The clumsy use of ChatGPT has landed a New York City law firm with a $5,000 fine.

Having heard so much about OpenAI’s impressive AI-powered chatbot, lawyer Steven Schwartz decided to use it for research, adding ChatGPT-generated case citations to a legal brief handed to a judge earlier this year. But it soon emerged that the cases had been entirely made up by the chatbot.

U.S. District Judge P. Kevin Castel on Thursday ordered Schwartz and fellow lawyer Peter LoDuca, who took over the case from Schwartz, along with their law firm, Levidow, Levidow & Oberman, to pay a $5,000 fine.

The judge said the lawyers had made “acts of conscious avoidance and false and misleading statements to the court,” adding that they had “abandoned their responsibilities” by submitting the AI-written brief before standing by “the fake opinions after judicial orders called their existence into question.”

Castel continued: “Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The court’s time is taken from other important endeavors.”

The judge added that the lawyers’ action “promotes cynicism about the legal profession and the American judicial system.”

The Manhattan law firm said it “respectfully” disagreed with the court’s opinion, describing it as a “good faith mistake.”

At a related court hearing earlier this month, Schwartz said he wanted to “sincerely apologize” for what had happened, explaining that he thought he was using a search engine and had no idea that the AI tool could produce untruths. He said he “deeply regretted” his actions, adding: “I suffered both professionally and personally [because of] the widespread publicity this issue has generated. I am both embarrassed, humiliated and extremely remorseful.”

The incident was linked to a case taken up by the law firm involving a passenger who sued Colombian airline Avianca after claiming he suffered an injury on a flight to New York City.

Avianca asked the judge to throw the case out, so the passenger’s legal team compiled a brief citing six similar cases in a bid to persuade the judge to let their client’s case proceed. Schwartz found those cases by asking ChatGPT, but he failed to check the authenticity of the results. Avianca’s legal team raised the alarm when it said it couldn’t locate the cases contained in the brief.

In a separate order on Thursday, the judge granted Avianca’s motion to dismiss the suit against it, bringing the whole sorry episode to a close.

ChatGPT and other chatbots like it have gained much attention in recent months due to their ability to converse in a human-like way and skillfully perform a growing range of text-based tasks. But they’re also known to make things up and present them as if they’re real. The problem is so prevalent that there’s even a term for it: “hallucinating.”

Those working on the generative AI tools are exploring ways to reduce hallucinations, but until then users are advised to carefully check any “facts” that the chatbots spit out.
