
"Since the release of ChatGPT in late 2022, the frequency of court submissions riddled with AI-hallucinated gibberish has increased exponentially. Now, more than three years later, it seems that not a week goes by without a headline about yet another case in which a lawyer has submitted briefs to the court full of AI-hallucinated gibberish."
"One of the standout features of many of these cases is that the attorneys double down, rather than admitting the error of their ways. Sometimes, they even submit responsive papers in defense of their actions that include hallucinations."
"The Court: You acknowledged it in your reply brief that some of the cases were not accurate. Counsel: The issue that I believe is really important is the fact th[at]... Rather than answering pointed questions about the errors, he deflected and tried to avoid the topic entirely."
Since ChatGPT's release in late 2022, court submissions containing AI-generated false information have increased exponentially, with attorneys submitting briefs riddled with hallucinated case citations and fabricated legal references. Notably, many lawyers compound the problem by doubling down on their errors, sometimes including additional hallucinations in the responsive papers they file in defense of their actions. A striking example occurred during an October 2025 appellate oral argument in Deutsche Bank National Trust Company v. Jean LeTennier before the New York Appellate Division, Third Department, where counsel faced a visibly frustrated bench over numerous hallucinations in his submissions. When confronted about errors he had acknowledged in his reply brief, the attorney deflected and denied responsibility, displaying apparent indifference to the false citations and fabricated legal authorities he had presented to the court.
#ai-hallucinations-in-legal-practice #court-submissions-and-ethics #chatgpt-misuse #attorney-accountability
Read at Above the Law