
"Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoid or diminish sanctions was to admit that AI was used as soon as it's detected, act humble, self-report the error to relevant legal associations, and voluntarily take classes on AI and law. But not every lawyer takes the path of least resistance, Ars' review found, with many instead offering excuses that no judge found credible."
"Since 2023-when fake AI citations started being publicized-the most popular excuse has been that the lawyer didn't know AI was used to draft a filing. Sometimes that means arguing that you didn't realize you were using AI, as in the case of a California lawyer who got stung by Google's AI Overviews, which he claimed he took for typical Google search results. Most often, lawyers using this excuse tend to blame an underling, but clients have been blamed, too. A Texas lawyer this month was sanctioned after deflecting so much that the court had to eventually put his client on the stand after he revealed she played a significant role in drafting the aberrant filing."
An 'epidemic' of fake AI-generated case citations is clogging court dockets and triggering sanctions. A database compiled by French lawyer and AI researcher Damien Charlotin documented 23 cases where lawyers were sanctioned for AI hallucinations. Judges advised that promptly admitting AI use, showing humility, self-reporting to legal associations, and taking AI-and-law classes can reduce penalties. Many lawyers instead offered excuses judges found not credible, and some were found to have lied about their AI use. Common deflections include claiming ignorance of AI involvement, blaming underlings or clients, and feigning unawareness of chatbots' propensity to hallucinate facts.
Read at Ars Technica