
"It has everyone scratching their heads. I mean, everyone knows the AI systems will do this, so why does it keep happening? A new Cornell University study and paper sheds some light on this, the problem of overreliance, and why the volcano of serious AI flaws may be about to erupt. Quite simply, the cost of verifying the results of the AI tools exceeds any savings from their use. It's a paradox."
"The Assumptions As pointed out in the study, the assumption fueling the explosion of AI use in legal is that will save gobs of time. This savings will inure to the benefit of lawyers and clients, will lead to fairer methods of billing like alternative fee structures, will get better results, improve access to justice, and lead to world peace. Well, maybe even the vendors would not go so far as to guarantee the last one. But vendors do seem"
AI research tools can produce fabricated cases and unreliable outputs that create serious legal risks when used without adequate verification. The cost of confirming AI-generated results often exceeds the time and money saved by using the tools, creating a paradox where automation increases workload. AI systems suffer from reality and transparency flaws that make blind trust dangerous. Vendors and pundits frequently overpromise capabilities, and law firms sometimes adopt complex systems they do not fully understand. The combination of verification costs, fundamental flaws, and high stakes in legal practice could significantly limit AI's practical role in law.
Read the full article at Above the Law.