Legal AI Might Be Accurate... And Still Not *Right* - Above the Law
Briefly

"Everyone knows about hallucinations. Well, apparently not , which is why hallucinations provide so much amusement. Lawyers keep putting them into their briefs and, sometimes, lying about it when caught. Judges are even getting in on the action with hallucinations of their own. The plague of hallucinations remains the most discussed AI threat for lawyers. But one AI weakspot that gets almost no attention - despite being arguably more dangerous - is the case where AI is both perfectly accurate and fundamentally incomplete."
"But incompleteness arises under a whole different set of circumstances. It's one thing to search a few hundred cases for helpful precedent, and another to scour millions of documents to make sure there's nothing harmful in there. This is work that humans simply can't manage on their own and there's no equivalent to cite-checking when the whole assignment is to "prove a negative." If AI set to that task misses a document, it's an " unknown unknowns.""
Hallucinations are a well-known AI failure that humans can often catch through cite-checking, but completeness failures present a deeper danger. AI can accurately summarize or retrieve information yet still miss critical documents hidden among millions of records, producing unknown unknowns that humans cannot reliably detect. Missing obscure prior art can produce multimillion-dollar consequences in patent litigation. Specialized patent analytics and search tools are required to surface buried prior art across global filings, machine-translated foreign documents, academic papers, and technical manuals because humans alone cannot feasibly scour the entire corpus.
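One way to see the accuracy-versus-completeness distinction is the difference between precision and recall in document retrieval. The short Python sketch below is purely illustrative and not drawn from the article or any particular legal AI tool; the document IDs are hypothetical. It shows a search that is perfectly "accurate" (every result it returns is relevant) while still missing a relevant document, which is exactly the kind of failure no cite-check will surface.

```python
# Illustrative sketch only: "accurate but incomplete" maps roughly to
# high precision with imperfect recall. Document IDs are hypothetical.

def precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved documents that are actually relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved: set, relevant: set) -> float:
    """Fraction of relevant documents the search actually found."""
    return len(retrieved & relevant) / len(relevant) if relevant else 1.0

# Suppose three documents in the corpus truly matter to the matter at hand...
relevant = {"doc_014", "doc_873", "doc_2491"}

# ...and the AI surfaces two of them, with no false positives.
retrieved = {"doc_014", "doc_873"}

print(f"precision: {precision(retrieved, relevant):.2f}")  # 1.00 - every answer checks out
print(f"recall:    {recall(retrieved, relevant):.2f}")     # 0.67 - one document stays buried
```

Checking the two returned documents confirms they are both correct, yet nothing in that review reveals the third document was ever there to find; that is the unknown unknown.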
Read at Above the Law