Judges Admit The Obvious, Concede AI Used For Hallucinated Opinions - Above the Law
"The judges had previously branded their wrong and subsequently withdrawn opinions as clerical errors. That lack of transparency undermined the judges' credibility, but both seem to have used the 'clerical' excuse in a good faith effort to avoid throwing interns under the bus. According to Judge Neals, a law school intern performed legal research with ChatGPT, while Judge Wingate writes that a law clerk used Perplexity."
"The judges explain that they have procedures to avoid this in the future, including Judge Wingate unnecessarily wastefully having cases physically printed out to rule out error. This feels a lot like promising to still use the Shepardizing books after the advent of online research, but Grassley was alive when Bonnie and Clyde were still around so overkill is probably a prudent way of keeping him satisfied."
Judge Julien Neals and Judge Henry Wingate issued draft opinions containing fabricated citations produced by AI tools; both opinions were withdrawn. The judges report that a law school intern used ChatGPT for research in Neals's case and that a law clerk used Perplexity in Wingate's. Both judges had initially characterized the erroneous opinions as clerical errors and said the drafts were not yet finalized when filed. The lack of transparency about the causes undermined their credibility, though both say new measures, including more stringent cite-checking and Wingate's practice of having case materials printed out, will prevent a recurrence. No confidential information was compromised.
Read at Above the Law