Judge slams lawyers for 'bogus AI-generated research'

A California judge condemned two law firms for their undisclosed use of AI, which produced numerous inaccuracies in a legal brief. Judge Michael Wilner imposed $31,000 in sanctions, emphasizing that competent attorneys should not outsource research and writing to AI. The issue arose when a plaintiff's attorney in a civil lawsuit used AI to generate an outline that contained fabricated citations. Wilner found that the errors nearly led to misleading information being incorporated into a judicial order, demonstrating the dangers of unverified AI-generated content in legal proceedings.
As noted in the filing, a plaintiff's attorney in a civil lawsuit against State Farm used AI to generate an outline for a supplemental brief. That outline, which contained "bogus AI-generated research," was then sent to a separate law firm, K&L Gates, which added the information to the brief.
"I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them - only to find that they didn't exist," the judge wrote.
Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying "no reasonably competent attorney should outsource research and writing" to AI.
"That's scary," Wilner added. "It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order."
Read at The Verge