Has AI Managed To Make Lawyers Even Dumber? - Above the Law
Briefly

"Artificial intelligence has, in turn, delivered the AI hallucination. Lawyers do it, judges do it, our clients do it. When Mata v. Avianca came down - the ur text of lawyer AI hallucination screw ups - we defended the technology against critics, stressing that the problem in this case remained fundamentally human. It shouldn't matter where the fake cite comes from... lawyers have an obligation to check their filings for accuracy."
"If the court calls you out for citing fake cases in a filing, one thing you can try is insisting that they exist and gaslight the judge by saying that you attached them.It won't work very well, but you can try it. pic.twitter.com/BOUCZB4rX1- Rob Freund (@RobertFreundLaw) October 15, 2025 Christine Lemmer-Webber described generative AI as Mansplaining As A Service, and I don't know if an AI tool suggested telling the judge that there were attached cases that weren't attached, but it raises the mansplaining bar."
Technological progress brings new forms of accidents, and artificial intelligence has produced the AI hallucination: fabricated authority and citations. Lawyers, judges, and clients all fall for them, leading to notable failures such as fake cites in litigation. Some defenders argue that the root problem remains human and point to lawyers' professional obligation to verify their filings, while repeated industry embarrassments raise the question of whether stronger enforcement or changes to the technology are needed. A lawyer insisting to a judge that fabricated cases were attached, and Christine Lemmer-Webber's characterization of generative AI as "Mansplaining As A Service," illustrate both the practical and the cultural frustrations with the phenomenon.
Read at Above the Law